diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index bd1b009290b9..5fb603a717e4 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -1,21 +1,18 @@
 # Contributing to Terraform
 
-**All communication on GitHub, the community forum, and other HashiCorp-provided communication channels is subject to [the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).**
-
-This repository contains Terraform core, which includes the command line interface and the main graph engine.
+This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository linked from the [Terraform Registry index](https://registry.terraform.io/browse/providers). Instructions for developing each provider are usually in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
 
-Providers are implemented as plugins that each have their own repository linked from the [Terraform Registry index](https://registry.terraform.io/browse/providers). Instructions for developing each provider are usually in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
+**All communication on GitHub, the community forum, and other HashiCorp-provided communication channels is subject to [the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).**
 
-This document provides guidance on Terraform contribution recommended practices. It covers what we're looking for in order to help set expectations and help you get the most out of participation in this project.
+This document provides guidance on Terraform contribution recommended practices.
It covers what we're looking for in order to help set some expectations and help you get the most out of participation in this project.
 
-To report a bug, an enhancement proposal, or give any other product feedback, please [open a GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose) using the most appropriate issue template. Please fill in all of the information the issue templates request. This will maximize our ability to act on your feedback.
+To record a bug report, enhancement proposal, or give any other product feedback, please [open a GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose) using the most appropriate issue template. Please do fill in all of the information the issue templates request, because we've seen from experience that this will maximize the chance that we'll be able to act on your feedback.
 
 ---
 
-- [Introduction](#Introduction)
-- [Contributing a Pull Request](#contributing-a-pull-request)
+- [Contributing Fixes](#contributing-fixes)
 - [Proposing a Change](#proposing-a-change)
   - [Caveats & areas of special concern](#caveats--areas-of-special-concern)
     - [State Storage Backends](#state-storage-backends)
@@ -31,31 +28,15 @@ To report a bug, an enhancement proposal, or give any other product feedback, pl
 
-## Introduction
-
-One of the great things about publicly available source code is that you can dive into the project and help _build the thing_ you believe is missing. It's a wonderful and generous instinct. However, Terraform is a complex tool. Even simple changes can have a serious impact on other areas of the code and it can take some time to become familiar with the effects of even basic changes. The Terraform team is not immune to unintended and sometimes undesirable consequences. We take our work seriously, and appreciate the responsibility of maintaining software for a globally diverse community that relies on Terraform for workflows of all sizes and criticality.
-
-As a result of Terraform's complexity and high bar for stability, the most straightforward way to help with the Terraform project is to [file a feature request or bug report](https://github.com/hashicorp/terraform/issues/new/choose), following the template to fully express your desired use case.
-
-If you believe you can also implement the solution for your bug or feature, we request that you first discuss the proposed solution with the core maintainer team. This discussion happens in GitHub, on the issue you created to describe the bug or feature. This discussion gives the core team a chance to explore any missing best practices or unintended consequences of the proposed change. Participating in this discussion and getting the go-ahead from a core maintainer is the only way to ensure your code is reviewed for inclusion with the project. It is also possible that the proposed solution is not workable, and will save you time writing code that will not be used due to unforeseen unintended consequences. Please read the section [Proposing a Change](#proposing-a-change) for the full details on this process.
-
-(As a side note, this is how we work internally at HashiCorp as well. Changes are proposed internally via an RFC process, in which all impacted teams are able to review the proposed changes and give feedback before any code is written. Written communication of changes via the RFC process is a core pillar of our internal coordination.)
-
-
-## Contributing a Pull Request
+## Contributing Fixes
 
-If you are a new contributor to Terraform, or looking to get started committing to the Terraform ecosystem, here are a couple of tips to get started.
+It can be tempting to want to dive into a community project and help _build the thing_ you believe you're missing. It's a wonderful and helpful intention. However, Terraform is a complex tool.
Many seemingly simple changes can have serious effects on other areas of the code and it can take some time to become familiar with the effects of even basic changes. The Terraform team is not immune to unintended and sometimes undesirable changes. We do take our work seriously, and appreciate the globally diverse community that relies on Terraform for workflows of all sizes and criticality.
 
-First, the easiest way to get started is to make fixes or improvements to the documentation. This can be done completely within GitHub, no need to even clone the project!
+As a result of Terraform's complexity and high bar for stability, the most straightforward way to start helping with the Terraform project is to pick an existing bug and [get to work](#terraform-clicore-development-environment).
 
-Beyond documentation improvements, it is easiest to contribute to Terraform on the edges. If you are looking for a good starting place to contribute, finding and resolving issues in the providers is the best first step. These projects have huge breadth of coverage and are always looking for contributors to fix issues that might not otherwise get the attention of a maintainer.
-
-Closer to home, within the Terraform core repository, working in areas like functions or backends tend to have less harmful unintended interactions with the core of Terraform (but, also, are not currently a high priority to be reviewed, so please discuss any changes with the team before you start.) It gets more difficult to contribute as you get closer to the core functionality (e.g., manipulating the graph and core language features). For these types of changes, please start with the [Proposing a Change](#proposing-a-change) section to understand how we think about managing this process.
-
-Once you are ready to write code, please see the section [Terraform CLI/Core Development Environment](#terraform-clicore-development-environment) to create your dev environment.
Please read the documentation, and don't be afraid to ask questions in our [community forum](https://discuss.hashicorp.com/c/terraform-core/27).
-
-You may see the `Good First Issue` label on issues in the Terraform repository on GitHub. We use this label to maintain a list of issues for new internal core team members to ramp up the codebase. That said, if you are feeling particularly ambitious, you can follow our process to propose a solution. Other HashiCorp repositories (for example, https://github.com/hashicorp/terraform-provider-aws/) do use the `Good First Issue` to indicate good issues for external contributors to get started.
+For new contributors we've labeled a few issues with `Good First Issue` as a nod to issues which will help get you familiar with Terraform development, while also providing an onramp to the codebase itself.
+Read the documentation, and don't be afraid to [ask questions](https://discuss.hashicorp.com/c/terraform-core/27).
 
 ## Proposing a Change
 
@@ -69,8 +50,7 @@ For large proposals that could entail a significant design phase, we wish to be
 
 Most changes will involve updates to the test suite, and changes to Terraform's documentation. The Terraform team can advise on different testing strategies for specific scenarios, and may ask you to revise the specific phrasing of your proposed documentation prose to match better with the standard "voice" of Terraform's documentation.
 
-We cannot always respond promptly to pull requests, particularly if they do not relate to an existing GitHub issue where the Terraform team has already participated and indicated willingness to work on the issue or accept PRs for the proposal. We *are* grateful for all contributions however, and will give feedback on pull requests as soon as we are able.
-
+This repository is primarily maintained by a small team at HashiCorp along with their other responsibilities, so unfortunately we cannot always respond promptly to pull requests, particularly if they do not relate to an existing GitHub issue where the Terraform team has already participated and indicated willingness to work on the issue or accept PRs for the proposal. We *are* grateful for all contributions however, and will give feedback on pull requests as soon as we're able.
 
 ### Caveats & areas of special concern
 
@@ -78,18 +58,13 @@ There are some areas of Terraform which are of special concern to the Terraform
 
 #### State Storage Backends
 
-The Terraform team is not merging PRs for new state storage backends. Our priority regarding state storage backends is to find maintainers for existing backends and remove those backends without maintainers.
-
-Please see the [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file for the status of a given backend. Community members with an interest in a particular backend are welcome to offer to maintain it.
+The Terraform team is not merging PRs for new state storage backends at the current time. Our priority regarding state storage backends is to find maintainers for existing backends and remove those backends without maintainers.
 
-In terms of setting expectations, there are three categories of backends in the Terraform repository: backends maintained by the core team (ex.: http); backends maintained by one of HashiCorp's provider teams (e.g. AWS S3, Azure, etc); and backends maintained by third party maintainers (ex.: Postgres, COS).
+Please see the [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file for the status of a given backend. Community members with an interest in a particular standard backend are welcome to help maintain it.
 
-* Backends maintained by the core team are unlikely to see accepted contributions.
We are triaging incoming pull requests, but these are not highly prioritized against our other work. The smaller and more-contained the change, the more likely it will be reviewed (please see also [Proposing a Change](#proposing-a-change)).
-
-* Backends maintained by one of HashiCorp's provider teams review contributions irregularly. There is no official commitment; typically once every few months one of the maintainers will review a number of backend PRs relating to their provider. The S3 and Azure backends tend to see the most on-going development.
-
-* Backends maintained by third-party maintainers are reviewed at the whim and availability of those maintainers. When the maintainer gives a positive code review to the pull request, the core team will do a review and merge the changes.
+Currently, merging state storage backends places a significant burden on the Terraform team. The team must set up an environment and cloud service provider account, or a new database/storage/key-value service, in order to build and test remote state storage backends. The time and complexity of doing so prevents us from moving Terraform forward in other ways.
+
+We are working to remove ourselves from the critical path of state storage backends by moving them towards a plugin model. In the meantime, we won't be accepting new remote state backends into Terraform.
 
 #### Provisioners
 
@@ -103,8 +78,7 @@ From our [documentation](https://www.terraform.io/docs/provisioners/index.html):
 
 The Terraform team is in the process of building a way forward which continues to decrease reliance on provisioners. In the meantime however, as our documentation indicates, they are a tool of last resort. As such, expect that PRs and issues for provisioners are not high in priority.
 
-Please see the [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file for the status of a given provisioner.
-
+Please see the [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file for the status of a given provisioner. Community members with an interest in a particular provisioner are welcome to help maintain it.
 
 #### Maintainers
 
@@ -116,7 +90,6 @@ There is no expectation on response time for our maintainers; they may be indisp
 
 If an unmaintained area of code interests you and you'd like to become a maintainer, you may simply make a PR against our [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file with your GitHub handle attached to the appropriate area. If there is a maintainer or team of maintainers for that area, please coordinate with them as necessary.
 
-
 ### Pull Request Lifecycle
 
 1. You are welcome to submit a [draft pull request](https://github.blog/2019-02-14-introducing-draft-pull-requests/) for commentary or review before it is fully completed. It's also a good idea to include specific questions or items you'd like feedback on.
 
@@ -144,7 +117,6 @@ The following checks run when a PR is opened:
 
 - Contributor License Agreement (CLA): If this is your first contribution to Terraform you will be asked to sign the CLA.
 - Tests: tests include unit tests and acceptance tests, and all tests must pass before a PR can be merged.
-- Vercel: this is an internal tool that does not run correctly for external contributors. We are aware of this and work around it for external contributions.
-
 ---
diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
index 8ac5c1bd30e6..08b438da4cf5 100644
--- a/.github/workflows/main.yml
+++ b/.github/workflows/main.yml
@@ -10,7 +10,7 @@ jobs:
   backport:
     if: github.event.pull_request.merged
     runs-on: ubuntu-latest
-    container: hashicorpdev/backport-assistant:0.3.4@sha256:1fb1e4dde82c28eaf27f4720eaffb2e19d490c8b42df244f834f5a550a703070
+    container: hashicorpdev/backport-assistant:0.2.1
     steps:
       - name: Run Backport Assistant
         run: |
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 534dd0aa3bab..e3facbc27bbf 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,33 +1,91 @@
-## 1.7.0 (Unreleased)
+## 1.6.4 (Unreleased)
 
-UPGRADE NOTES:
+BUG FIXES:
+* `terraform test`: Fix bug preventing passing sensitive output values from previous run blocks as inputs to future run blocks. ([#34190](https://github.com/hashicorp/terraform/pull/34190))
+
+## 1.6.3 (November 1, 2023)
+
+ENHANCEMENTS:
+* backend/s3: Adds the parameter `skip_s3_checksum` to allow users to disable checksum on S3 uploads for compatibility with "S3-compatible" APIs. ([#34127](https://github.com/hashicorp/terraform/pull/34127))
+
+## 1.6.2 (October 18, 2023)
+
+BUG FIXES:
+* `terraform test`: Fix performance issues when using provisioners within configs being tested. ([#34026](https://github.com/hashicorp/terraform/pull/34026))
+* `terraform test`: Only process and parse relevant variables for each run block. ([#34072](https://github.com/hashicorp/terraform/pull/34072))
+* Fix occasional crash when destroying configurations with variables containing validations. ([#34101](https://github.com/hashicorp/terraform/pull/34101))
+* Fix interoperability issues between v1.6 series and earlier series by removing variable validations from the state file ([#34058](https://github.com/hashicorp/terraform/pull/34058)).
+* cloud: Fixes panic when saving state in Terraform Cloud when certain types of API errors are returned ([#34074](https://github.com/hashicorp/terraform/pull/34074)).
+* config: Fix crash in conditional statements with certain combinations of unknown values. Improve handling of refined values into the conditional expression results ([#34096](https://github.com/hashicorp/terraform/issues/34096))
+* config: Update HCL to fix bug when decoding objects with optional attributes ([#34108](https://github.com/hashicorp/terraform/issues/34108))
+* backend/s3: Some configurations would require `-reconfigure` during each `init` when config was not decoded correctly ([#34108](https://github.com/hashicorp/terraform/issues/34108))
+
+## 1.6.1 (October 10, 2023)
 
-* Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue ([#33770](https://github.com/hashicorp/terraform/issues/33770)) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:
-  * Users of Terraform prior to 1.3.0 are unaffected;
-  * Terraform 1.3 series users should upgrade to 1.3.10;
-  * Terraform 1.4 series users should upgrade to 1.4.7;
-  * Terraform 1.5 series users should upgrade to 1.5.7;
-  * Users of Terraform 1.6.0 and later are unaffected.
-
-  This is important for users with `terraform_remote_state` data sources reading remote state across different versions of Terraform.
-* `nonsensitive` function no longer errors when applied to values that are already not sensitive. ([#33856](https://github.com/hashicorp/terraform/issues/33856))
+ENHANCEMENTS:
+* backend/s3: The `skip_requesting_account_id` argument supports AWS API implementations that do not have the IAM, STS, or metadata API.
([#34002](https://github.com/hashicorp/terraform/pull/34002))
 
 BUG FIXES:
 
+* config: Using sensitive values as one or both of the results of a conditional expression will no longer crash. ([#33996](https://github.com/hashicorp/terraform/issues/33996))
+* config: Conditional expression returning refined-non-null result will no longer crash. ([#33996](https://github.com/hashicorp/terraform/issues/33996))
+* cli: Reverted back to previous behavior of ignoring signing key expiration for provider installation, since it's the provider registry's responsibility to verify key validity at publication time. ([#34004](https://github.com/hashicorp/terraform/issues/34004))
+* cli: `GIT_SSH_COMMAND` is now preserved again when fetching modules from git source addresses. ([#34045](https://github.com/hashicorp/terraform/issues/34045))
+* cloud: The `TF_WORKSPACE` environment variable works with the `cloud` block again; it can specify a workspace when none is configured, or select an active workspace when the config specifies `tags`. ([#34012](https://github.com/hashicorp/terraform/issues/34012))
+* backend/s3: S3, DynamoDB, IAM, and STS endpoint parameters will no longer fail validation if the parsed scheme or hostname is empty. ([#34017](https://github.com/hashicorp/terraform/pull/34017))
+* backend/s3: Providing a key alias to the `kms_key_id` argument will no longer fail validation. ([#33993](https://github.com/hashicorp/terraform/pull/33993))
-* Ignore potential remote terraform version mismatch when running force-unlock ([#28853](https://github.com/hashicorp/terraform/issues/28853))
-* Exit Dockerfile build script early on `cd` failure. ([#34128](https://github.com/hashicorp/terraform/issues/34128))
+
+## 1.6.0 (October 4, 2023)
+
+UPGRADE NOTES:
+* On macOS, Terraform now requires macOS 10.15 Catalina or later; support for previous versions has been discontinued.
+* On Windows, Terraform now requires at least Windows 10 or Windows Server 2016; support for previous versions has been discontinued.
+* The S3 backend has a number of significant changes to its configuration format in this release, intended to match with recent changes in the `hashicorp/aws` provider:
+  * Configuration settings related to assuming IAM roles now belong to a nested block `assume_role`. The top-level arguments `role_arn`, `session_name`, `external_id`, `assume_role_duration_seconds`, `assume_role_policy_arns`, `assume_role_tags`, and `assume_role_transitive_tag_keys` are all now deprecated in favor of the nested equivalents. ([#30495](https://github.com/hashicorp/terraform/issues/30495))
+  * Configuration settings related to overriding the locations of AWS service endpoints used by the provider now belong to a nested block `endpoints`. The top-level arguments `dynamodb_endpoint`, `iam_endpoint`, `endpoint` (for S3), and `sts_endpoint` are now deprecated in favor of the nested equivalents. ([#30492](https://github.com/hashicorp/terraform/issues/30492))
+  * The backend now uses the following environment variables for overriding the default locations of AWS service endpoints used by the provider: `AWS_ENDPOINT_URL_DYNAMODB`, `AWS_ENDPOINT_URL_IAM`, `AWS_ENDPOINT_URL_S3`, and `AWS_ENDPOINT_URL_STS`. The old non-standard names for these environment variables are now deprecated: `AWS_DYNAMODB_ENDPOINT`, `AWS_IAM_ENDPOINT`, `AWS_S3_ENDPOINT`, and `AWS_STS_ENDPOINT`. ([#30479](https://github.com/hashicorp/terraform/issues/30479))
+  * The singular `shared_credentials_file` argument is deprecated in favor of the plural `shared_credentials_files`.
+  * The `force_path_style` argument is deprecated in favor of `use_path_style` for consistency with the AWS SDK. ([#30491](https://github.com/hashicorp/terraform/issues/30491))
+
+NEW FEATURES:
+* `terraform test`: The `terraform test` command is now generally available.
This comes with a significant change to how tests are written and executed, based on feedback from the experimental phase.
+
+  Terraform tests are written in `.tftest.hcl` files, containing a series of `run` blocks. Each `run` block executes a Terraform plan and optional apply against the Terraform configuration under test and can check conditions against the resulting plan and state.
 
 ENHANCEMENTS:
 
+* config: The `import` block `id` field now accepts expressions referring to other values such as resource attributes, as long as the value is a string known at plan time. ([#33618](https://github.com/hashicorp/terraform/issues/33618))
+* Terraform Cloud integration: Remote plans on Terraform Cloud/Enterprise can now be saved using the `-out` option, viewed using `terraform show`, and applied using `terraform apply` with the saved plan filename. ([#33492](https://github.com/hashicorp/terraform/issues/33492))
+* config: Terraform can now track some additional detail about values that won't be known until the apply step, such as the range of possible lengths for a collection or whether an unknown value can possibly be null. When this information is available, Terraform can potentially generate known results for some operations on unknown values. This doesn't mean that Terraform can immediately track that detail in all cases, but the type system now supports that and so over time we can improve the level of detail generated by built-in functions, language operators, Terraform providers, etc. ([#33234](https://github.com/hashicorp/terraform/issues/33234))
+* core: Provider schemas can now be cached globally for compatible providers, allowing them to be reused throughout core without requesting them for each new provider instance.
This can significantly reduce memory usage when there are many instances of the same provider in a single configuration ([#33482](https://github.com/hashicorp/terraform/pull/33482))
+* config: The `try` and `can` functions can now return more precise and consistent results when faced with unknown arguments ([#33758](https://github.com/hashicorp/terraform/pull/33758))
+* `terraform show -json`: Now includes `errored` property, indicating whether the planning process halted with an error. An errored plan is not applyable. ([#33372](https://github.com/hashicorp/terraform/issues/33372))
+* core: Terraform will now skip requesting the (possibly very large) provider schema from providers which indicate during handshake that they don't require that for correct behavior, in situations where Terraform Core itself does not need the schema. ([#33486](https://github.com/hashicorp/terraform/pull/33486))
+* backend/kubernetes: The Kubernetes backend is no longer limited to storing states below 1MiB in size, and can now scale by splitting state across multiple secrets. ([#29678](https://github.com/hashicorp/terraform/pull/29678))
+* backend/s3: Various improvements for consistency with `hashicorp/aws` provider capabilities:
+  * `assume_role_with_web_identity` nested block for assuming a role with dynamic credentials such as a JSON Web Token. ([#31244](https://github.com/hashicorp/terraform/issues/31244))
+  * Now honors the standard AWS environment variables for credential and configuration files: `AWS_CONFIG_FILE` and `AWS_SHARED_CREDENTIALS_FILE`. ([#30493](https://github.com/hashicorp/terraform/issues/30493))
+  * `shared_config_files` and `shared_credentials_files` arguments for specifying credential and configuration files as part of the backend configuration.
([#30493](https://github.com/hashicorp/terraform/issues/30493))
+  * Internally the backend now uses AWS SDK for Go v2, which should address various other missing behaviors that are handled by the SDK rather than by Terraform itself. ([#30443](https://github.com/hashicorp/terraform/issues/30443))
+  * `custom_ca_bundle` argument and support for the corresponding AWS environment variable, `AWS_CA_BUNDLE`, for providing custom root and intermediate certificates. ([#33689](https://github.com/hashicorp/terraform/issues/33689))
+  * `ec2_metadata_service_endpoint` and `ec2_metadata_service_endpoint_mode` arguments and support for the corresponding AWS environment variables, `AWS_EC2_METADATA_SERVICE_ENDPOINT` and `AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE`, for setting the EC2 metadata service (IMDS) endpoint. The environment variable `AWS_METADATA_URL` is also supported for compatibility with the AWS provider, but is deprecated. ([#30444](https://github.com/hashicorp/terraform/issues/30444))
+  * `http_proxy`, `insecure`, `use_fips_endpoint`, and `use_dualstack_endpoint` arguments and support for the corresponding environment variables, `HTTP_PROXY` and `HTTPS_PROXY`, which enable custom HTTP proxy configurations and the resolution of AWS endpoints with extended capabilities. ([#30496](https://github.com/hashicorp/terraform/issues/30496))
+  * `sts_region` argument to use an alternative region for STS operations. ([#33693](https://github.com/hashicorp/terraform/issues/33693))
+  * `retry_mode` argument and support for the corresponding `AWS_RETRY_MODE` environment variable to configure how retries are attempted. ([#33692](https://github.com/hashicorp/terraform/issues/33692))
+  * `allowed_account_ids` and `forbidden_account_ids` arguments to prevent unintended modifications to specified environments. ([#33688](https://github.com/hashicorp/terraform/issues/33688))
+* backend/cos: Support custom HTTP(S) endpoint and root domain for the API client.
([#33656](https://github.com/hashicorp/terraform/issues/33656))
 
-* `terraform test`: Providers defined within test files can now reference variables from their configuration that are defined within the test file. ([#34069](https://github.com/hashicorp/terraform/issues/34069))
-* `terraform test`: Providers defined within test files can now reference outputs from run blocks. ([#34118](https://github.com/hashicorp/terraform/issues/34118))
-* `import`: `for_each` can now be used to expand the `import` block to handle multiple resource instances ([#33932](https://github.com/hashicorp/terraform/issues/33932))
+BUG FIXES:
+* core: Transitive dependencies were lost during apply when the referenced resource expanded into zero instances. ([#33403](https://github.com/hashicorp/terraform/issues/33403))
+* cli: Terraform will no longer override SSH settings in local git configuration when installing modules. ([#33592](https://github.com/hashicorp/terraform/issues/33592))
+* `terraform` built-in provider: The upstream dependency that Terraform uses for service discovery of Terraform-native services such as Terraform Cloud/Enterprise state storage was previously not concurrency-safe, but Terraform was treating it as if it was in situations like when a configuration has multiple `terraform_remote_state` blocks all using the "remote" backend. Terraform is now using a newer version of that library which updates its internal caches in a concurrency-safe way. ([#33364](https://github.com/hashicorp/terraform/issues/33364))
+* `terraform init`: Terraform will no longer allow downloading remote modules to invalid paths.
([#33745](https://github.com/hashicorp/terraform/issues/33745))
+* Ignore potential remote terraform version mismatch when running force-unlock ([#28853](https://github.com/hashicorp/terraform/issues/28853))
+* cloud: Fixed a bug that would prevent nested symlinks from being dereferenced into the config sent to Terraform Cloud ([#31895](https://github.com/hashicorp/terraform/issues/31895))
+* cloud: state snapshots could not be disabled when the header x-terraform-snapshot-interval is absent ([#33820](https://github.com/hashicorp/terraform/pull/33820))
 
 ## Previous Releases
 
 For information on prior major and minor releases, see their changelogs:
 
-* [v1.6](https://github.com/hashicorp/terraform/blob/v1.6/CHANGELOG.md)
 * [v1.5](https://github.com/hashicorp/terraform/blob/v1.5/CHANGELOG.md)
 * [v1.4](https://github.com/hashicorp/terraform/blob/v1.4/CHANGELOG.md)
 * [v1.3](https://github.com/hashicorp/terraform/blob/v1.3/CHANGELOG.md)
diff --git a/LICENSE b/LICENSE
index 9bf36510868b..f10d14f735c5 100644
--- a/LICENSE
+++ b/LICENSE
@@ -4,7 +4,7 @@ License text copyright (c) 2020 MariaDB Corporation Ab, All Rights Reserved.
 Parameters
 
 Licensor: HashiCorp, Inc.
-Licensed Work: Terraform 1.7.0-dev. The Licensed Work is (c) 2023 HashiCorp, Inc.
+Licensed Work: Terraform 1.6.4. The Licensed Work is (c) 2023 HashiCorp, Inc.
Additional Use Grant: You may make production use of the Licensed Work, provided Your use does not include offering the Licensed Work to third parties on a hosted or embedded basis in order to compete with
diff --git a/go.mod b/go.mod
index 9ab233db91de..abf84f0c6b63 100644
--- a/go.mod
+++ b/go.mod
@@ -1,8 +1,8 @@
 module github.com/hashicorp/terraform
 
 require (
-	cloud.google.com/go/kms v1.12.1
-	cloud.google.com/go/storage v1.30.1
+	cloud.google.com/go/kms v1.10.1
+	cloud.google.com/go/storage v1.28.1
 	github.com/Azure/azure-sdk-for-go v59.2.0+incompatible
 	github.com/Azure/go-autorest/autorest v0.11.27
 	github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2
@@ -46,7 +46,7 @@ require (
 	github.com/hashicorp/go-multierror v1.1.1
 	github.com/hashicorp/go-plugin v1.4.3
 	github.com/hashicorp/go-retryablehttp v0.7.4
-	github.com/hashicorp/go-tfe v1.34.0
+	github.com/hashicorp/go-tfe v1.36.0
 	github.com/hashicorp/go-uuid v1.0.3
 	github.com/hashicorp/go-version v1.6.0
 	github.com/hashicorp/hcl v1.0.0
@@ -94,15 +94,15 @@ require (
 	golang.org/x/exp v0.0.0-20230905200255-921286631fa9
 	golang.org/x/mod v0.12.0
 	golang.org/x/net v0.17.0
-	golang.org/x/oauth2 v0.10.0
+	golang.org/x/oauth2 v0.8.0
 	golang.org/x/sys v0.13.0
 	golang.org/x/term v0.13.0
 	golang.org/x/text v0.13.0
 	golang.org/x/tools v0.13.0
 	golang.org/x/tools/cmd/cover v0.1.0-deprecated
-	google.golang.org/api v0.126.0
-	google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98
-	google.golang.org/grpc v1.58.3
+	google.golang.org/api v0.114.0
+	google.golang.org/genproto v0.0.0-20230530153820-e85fd2cbaebc
+	google.golang.org/grpc v1.56.1
 	google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0
 	google.golang.org/protobuf v1.31.0
 	honnef.co/go/tools v0.5.0-0.dev.0.20230826160118-ad5ca31ff221
@@ -113,10 +113,10 @@ require (
 )
 
 require (
-	cloud.google.com/go v0.110.4 // indirect
-	cloud.google.com/go/compute v1.21.0 // indirect
+	cloud.google.com/go v0.110.0 // indirect
+	cloud.google.com/go/compute v1.19.1 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect - cloud.google.com/go/iam v1.1.1 // indirect + cloud.google.com/go/iam v0.13.0 // indirect github.com/AlecAivazis/survey/v2 v2.3.6 // indirect github.com/Azure/go-autorest v14.2.0+incompatible // indirect github.com/Azure/go-autorest/autorest/adal v0.9.20 // indirect @@ -192,9 +192,8 @@ require ( github.com/google/go-github/v45 v45.2.0 // indirect github.com/google/go-querystring v1.1.0 // indirect github.com/google/gofuzz v1.1.0 // indirect - github.com/google/s2a-go v0.1.4 // indirect github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect - github.com/googleapis/gax-go/v2 v2.11.0 // indirect + github.com/googleapis/gax-go/v2 v2.7.1 // indirect github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect github.com/hashicorp/go-immutable-radix v1.0.0 // indirect github.com/hashicorp/go-msgpack v0.5.4 // indirect @@ -257,12 +256,12 @@ require ( go.opentelemetry.io/otel/metric v1.19.0 // indirect go.opentelemetry.io/proto/otlp v1.0.0 // indirect golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a // indirect - golang.org/x/sync v0.3.0 // indirect + golang.org/x/sync v0.4.0 // indirect golang.org/x/time v0.3.0 // indirect golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect google.golang.org/appengine v1.6.7 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20230530153820-e85fd2cbaebc // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20230530153820-e85fd2cbaebc // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/ini.v1 v1.66.2 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect diff --git a/go.sum b/go.sum index 76b84207eaac..b1800a3ce3e8 100644 --- a/go.sum +++ b/go.sum @@ -32,8 +32,8 @@ cloud.google.com/go v0.100.2/go.mod 
h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w9 cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc= cloud.google.com/go v0.102.1/go.mod h1:XZ77E9qnTEnrgEOvr4xzfdX5TRo7fB4T2F4O6+34hIU= cloud.google.com/go v0.104.0/go.mod h1:OO6xxXdJyvuJPcEPBLN9BJPD+jep5G1+2U5B5gkRYtA= -cloud.google.com/go v0.110.4 h1:1JYyxKMN9hd5dR2MYTPWkGUgcoxVVhg0LKNKEo0qvmk= -cloud.google.com/go v0.110.4/go.mod h1:+EYjdK8e5RME/VY/qLCAtuyALQ9q67dvuum8i+H5xsI= +cloud.google.com/go v0.110.0 h1:Zc8gqp3+a9/Eyph2KDmcGaPtbKRIoqq4YTlL4NMD0Ys= +cloud.google.com/go v0.110.0/go.mod h1:SJnCLqQ0FCFGSZMUNUf84MV3Aia54kn7pi8st7tMzaY= cloud.google.com/go/aiplatform v1.22.0/go.mod h1:ig5Nct50bZlzV6NvKaTwmplLLddFx0YReh9WfTO5jKw= cloud.google.com/go/aiplatform v1.24.0/go.mod h1:67UUvRBKG6GTayHKV8DBv2RtR1t93YRu5B1P3x99mYY= cloud.google.com/go/analytics v0.11.0/go.mod h1:DjEWCu41bVbYcKyvlws9Er60YE4a//bK6mnhWvQeFNI= @@ -70,8 +70,8 @@ cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U= cloud.google.com/go/compute v1.10.0/go.mod h1:ER5CLbMxl90o2jtNbGSbtfOpQKR0t15FOtRsugnLrlU= -cloud.google.com/go/compute v1.21.0 h1:JNBsyXVoOoNJtTQcnEY5uYpZIbeCTYIeDe0Xh1bySMk= -cloud.google.com/go/compute v1.21.0/go.mod h1:4tCnrn48xsqlwSAiLf1HXMQk8CONslYbdiEZc9FEIbM= +cloud.google.com/go/compute v1.19.1 h1:am86mquDUgjGNWxiGn+5PGLbmgiWXlE/yNWpIpNvuXY= +cloud.google.com/go/compute v1.19.1/go.mod h1:6ylj3a05WF8leseCdIf77NK0g1ey+nj5IKd5/kvShxE= cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY= cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA= cloud.google.com/go/containeranalysis v0.5.1/go.mod h1:1D92jd8gRR/c0fGMlymRgxWD3Qw9C1ff6/T7mLgVL8I= @@ -111,14 +111,16 @@ cloud.google.com/go/gkehub v0.10.0/go.mod 
h1:UIPwxI0DsrpsVoWpLB0stwKCP+WFVG9+y97 cloud.google.com/go/grafeas v0.2.0/go.mod h1:KhxgtF2hb0P191HlY5besjYm6MqTSTj3LSI+M+ByZHc= cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY= cloud.google.com/go/iam v0.5.0/go.mod h1:wPU9Vt0P4UmCux7mqtRu6jcpPAb74cP1fh50J3QpkUc= -cloud.google.com/go/iam v1.1.1 h1:lW7fzj15aVIXYHREOqjRBV9PsH0Z6u8Y46a1YGvQP4Y= -cloud.google.com/go/iam v1.1.1/go.mod h1:A5avdyVL2tCppe4unb0951eI9jreack+RJ0/d+KUZOU= -cloud.google.com/go/kms v1.12.1 h1:xZmZuwy2cwzsocmKDOPu4BL7umg8QXagQx6fKVmf45U= -cloud.google.com/go/kms v1.12.1/go.mod h1:c9J991h5DTl+kg7gi3MYomh12YEENGrf48ee/N/2CDM= +cloud.google.com/go/iam v0.13.0 h1:+CmB+K0J/33d0zSQ9SlFWUeCCEn5XJA0ZMZ3pHE9u8k= +cloud.google.com/go/iam v0.13.0/go.mod h1:ljOg+rcNfzZ5d6f1nAUJ8ZIxOaZUVoS14bKCtaLZ/D0= +cloud.google.com/go/kms v1.10.1 h1:7hm1bRqGCA1GBRQUrp831TwJ9TWhP+tvLuP497CQS2g= +cloud.google.com/go/kms v1.10.1/go.mod h1:rIWk/TryCkR59GMC3YtHtXeLzd634lBbKenvyySAyYI= cloud.google.com/go/language v1.4.0/go.mod h1:F9dRpNFQmJbkaop6g0JhSBXCNlO90e1KWx5iDdxbWic= cloud.google.com/go/language v1.6.0/go.mod h1:6dJ8t3B+lUYfStgls25GusK04NLh3eDLQnWM3mdEbhI= cloud.google.com/go/lifesciences v0.5.0/go.mod h1:3oIKy8ycWGPUyZDR/8RNnTOYevhaMLqh5vLUXs9zvT8= cloud.google.com/go/lifesciences v0.6.0/go.mod h1:ddj6tSX/7BOnhxCSd3ZcETvtNr8NZ6t/iPhY2Tyfu08= +cloud.google.com/go/longrunning v0.4.1 h1:v+yFJOfKC3yZdY6ZUI933pIYdhyhV8S3NpWrXWmg7jM= +cloud.google.com/go/longrunning v0.4.1/go.mod h1:4iWDqhBZ70CvZ6BfETbvam3T8FMvLK+eFj0E6AaRQTo= cloud.google.com/go/mediatranslation v0.5.0/go.mod h1:jGPUhGTybqsPQn91pNXw0xVHfuJ3leR1wj37oU3y1f4= cloud.google.com/go/mediatranslation v0.6.0/go.mod h1:hHdBCTYNigsBxshbznuIMFNe5QXEowAuNmmC7h8pu5w= cloud.google.com/go/memcache v1.4.0/go.mod h1:rTOfiGZtJX1AaFUrOgsMHX5kAzaTQ8azHiuDoTPzNsE= @@ -176,8 +178,8 @@ cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3f cloud.google.com/go/storage v1.22.1/go.mod 
h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y= cloud.google.com/go/storage v1.23.0/go.mod h1:vOEEDNFnciUMhBeT6hsJIn3ieU5cFRmzeLgDvXzfIXc= cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi80i/iqR2s= -cloud.google.com/go/storage v1.30.1 h1:uOdMxAs8HExqBlnLtnQyP0YkvbiDpdGShGKtx6U/oNM= -cloud.google.com/go/storage v1.30.1/go.mod h1:NfxhC0UJE1aXSx7CIIbCf7y9HKT7BiccwkR7+P7gN8E= +cloud.google.com/go/storage v1.28.1 h1:F5QDG5ChchaAVQhINh24U99OWHURqrW8OmQcGKXcbgI= +cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y= cloud.google.com/go/talent v1.1.0/go.mod h1:Vl4pt9jiHKvOgF9KoZo6Kob9oV4lwd/ZD5Cto54zDRw= cloud.google.com/go/talent v1.2.0/go.mod h1:MoNF9bhFQbiJ6eFD3uSsg0uBALw4n4gaCaEjBw9zo8g= cloud.google.com/go/videointelligence v1.6.0/go.mod h1:w0DIDlVRKtwPCn/C4iwZIJdvC69yInhW0cfi+p546uU= @@ -615,8 +617,6 @@ github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLe github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= -github.com/google/s2a-go v0.1.4 h1:1kZ/sQM3srePvKs3tXAvQzo66XfcReoqFpIpIccE7Oc= -github.com/google/s2a-go v0.1.4/go.mod h1:Ej+mSEMGRnqRzjc7VtF+jdBwYG5fuJfiZ8ELkjEwM0A= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I= @@ -635,8 +635,8 @@ github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99 github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c= github.com/googleapis/gax-go/v2 v2.5.1/go.mod h1:h6B0KMMFNtI2ddbGJn3T3ZbwkeT6yqEF02fYlzkUCyo= 
github.com/googleapis/gax-go/v2 v2.6.0/go.mod h1:1mjbznJAPHFpesgE5ucqfYEscaz5kMdcIDwU/6+DDoY= -github.com/googleapis/gax-go/v2 v2.11.0 h1:9V9PWXEsWnPpQhu/PeQIkS4eGzMlTLGgt80cUUI8Ki4= -github.com/googleapis/gax-go/v2 v2.11.0/go.mod h1:DxmR61SGKkGLa2xigwuZIQpkCI2S5iydzRfb3peWZJI= +github.com/googleapis/gax-go/v2 v2.7.1 h1:gF4c0zjUP2H/s/hEGyLA3I0fA2ZWjzYiONAD6cvPr8A= +github.com/googleapis/gax-go/v2 v2.7.1/go.mod h1:4orTrqY6hXxxaUL4LHIPl6lGo8vAE38/qKbhSAKP6QI= github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4= github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g= github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= @@ -701,8 +701,8 @@ github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerX github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc= github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A= github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4= -github.com/hashicorp/go-tfe v1.34.0 h1:33xcVQ8sbkdPVhubpVH208+S72l6l5bFEi14363yOIE= -github.com/hashicorp/go-tfe v1.34.0/go.mod h1:xA+N8r81ldvS7PC/DVbL+FqGsgfrcu1Ak4+6His+51s= +github.com/hashicorp/go-tfe v1.36.0 h1:Wq73gjjDo/f9gkKQ5MVSb+4NNJ6T7c5MVTivA0s/bZ0= +github.com/hashicorp/go-tfe v1.36.0/go.mod h1:awOuTZ4K9F1EJsKBIoxonJlb7Axn3PIb8YeBLtm/G/0= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= @@ -1124,7 +1124,6 @@ golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5y golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod 
h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.0.0-20220314234659-1baeb1ce4c0b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= @@ -1262,8 +1261,8 @@ golang.org/x/oauth2 v0.0.0-20220822191816-0ebed06d0094/go.mod h1:h4gKUeWbJ4rQPri golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.1.0/go.mod h1:G9FE4dLTsbXUu90h/Pf85g4w1D+SSAgR+q46nJZ8M4A= -golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8= -golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI= +golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8= +golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -1279,8 +1278,8 @@ golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync 
v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E= -golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0 h1:zxkM55ReGkDlKSM+Fu41A+zmbZuaPVbGMzvvdUPznYQ= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -1398,7 +1397,6 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= @@ -1528,8 +1526,8 @@ google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ google.golang.org/api v0.97.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s= google.golang.org/api v0.98.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s= google.golang.org/api v0.100.0/go.mod h1:ZE3Z2+ZOr87Rx7dqFsdRQkRBk36kDtp/h+QpHbB7a70= -google.golang.org/api v0.126.0 h1:q4GJq+cAdMAC7XP7njvQ4tvohGLiSlytuL4BQxbIZ+o= -google.golang.org/api v0.126.0/go.mod h1:mBwVAtz+87bEN6CbA1GtZPDOqY2R5ONPqJeIlvyo4Aw= 
+google.golang.org/api v0.114.0 h1:1xQPji6cO2E2vLiI+C/XiFAnsn1WV3mjaEwGLhi3grE= +google.golang.org/api v0.114.0/go.mod h1:ifYI2ZsFK6/uGddGfAD5BMxlnkBqCmqHSDUVi45N5Yg= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -1644,12 +1642,12 @@ google.golang.org/genproto v0.0.0-20221010155953-15ba04fc1c0e/go.mod h1:3526vdqw google.golang.org/genproto v0.0.0-20221014173430-6e2ab493f96b/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM= google.golang.org/genproto v0.0.0-20221014213838-99cd37c6964a/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM= google.golang.org/genproto v0.0.0-20221025140454-527a21cfbd71/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s= -google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98 h1:Z0hjGZePRE0ZBWotvtrwxFNrNE9CUAGtplaDK5NNI/g= -google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98/go.mod h1:S7mY02OqCJTD0E1OiQy1F72PWFB4bZJ87cAtLPYgDR0= -google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 h1:FmF5cCW94Ij59cfpoLiwTgodWmm60eEV0CjlsVg2fuw= -google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98/go.mod h1:rsr7RhLuwsDKL7RmgDDCUc6yaGr1iqceVb5Wv6f6YvQ= -google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 h1:bVf09lpb+OJbByTj913DRJioFFAjf/ZGxEz7MajTp2U= -google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98/go.mod h1:TUfxEVdsvPg18p6AslUXFoLdpED4oBnGwyqk3dV1XzM= +google.golang.org/genproto v0.0.0-20230530153820-e85fd2cbaebc h1:8DyZCyvI8mE1IdLy/60bS+52xfymkE72wv1asokgtao= +google.golang.org/genproto v0.0.0-20230530153820-e85fd2cbaebc/go.mod h1:xZnkP7mREFX5MORlOPEzLMr+90PPZQ2QWzrVTWfAq64= +google.golang.org/genproto/googleapis/api v0.0.0-20230530153820-e85fd2cbaebc h1:kVKPf/IiYSBWEWtkIn6wZXwWGCnLKcC8oWfZvXjsGnM= 
+google.golang.org/genproto/googleapis/api v0.0.0-20230530153820-e85fd2cbaebc/go.mod h1:vHYtlOoi6TsQ3Uk2yxR7NI5z8uoV+3pZtR4jmHIkRig= +google.golang.org/genproto/googleapis/rpc v0.0.0-20230530153820-e85fd2cbaebc h1:XSJ8Vk1SWuNr8S18z1NZSziL0CPIXLCCMDOEFtHBOFc= +google.golang.org/genproto/googleapis/rpc v0.0.0-20230530153820-e85fd2cbaebc/go.mod h1:66JfowdXAEgad5O9NnYcsNPLCPZJD++2L9X0PCMODrA= google.golang.org/grpc v1.8.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= @@ -1688,8 +1686,8 @@ google.golang.org/grpc v1.48.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACu google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= google.golang.org/grpc v1.50.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= -google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ= -google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0= +google.golang.org/grpc v1.56.1 h1:z0dNfjIl0VpaZ9iSVjA6daGatAYwPGstTjt5vkRMFkQ= +google.golang.org/grpc v1.56.1/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 h1:M1YKkFIboKNieVO5DLUEVzQfGwJD30Nv2jfUgzb5UcE= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= diff --git a/internal/addrs/module.go b/internal/addrs/module.go index aa309c0f4f7c..0ade0bf9f794 100644 --- a/internal/addrs/module.go +++ b/internal/addrs/module.go @@ -66,14 +66,6 @@ func (m Module) Equal(other Module) bool { return true } -type moduleKey string - -func (m Module) UniqueKey() UniqueKey { - return 
moduleKey(m.String()) -} - -func (mk moduleKey) uniqueKeySigil() {} - func (m Module) targetableSigil() { // Module is targetable } diff --git a/internal/addrs/module_instance.go b/internal/addrs/module_instance.go index 690e14fd2456..d2a2c2e417f7 100644 --- a/internal/addrs/module_instance.go +++ b/internal/addrs/module_instance.go @@ -504,18 +504,6 @@ func (m ModuleInstance) Module() Module { return ret } -// ContainingModule returns the address of the module instance as if the last -// step wasn't instanced. For example, it turns module.child[0] into -// module.child and module[0].child[0] into module[0].child. -func (m ModuleInstance) ContainingModule() ModuleInstance { - if len(m) == 0 { - return nil - } - - ret := m.Parent() - return ret.Child(m[len(m)-1].Name, NoKey) -} - func (m ModuleInstance) AddrType() TargetableAddrType { return ModuleInstanceAddrType } diff --git a/internal/addrs/module_instance_test.go b/internal/addrs/module_instance_test.go index bcf809e0137e..0014522dc5e4 100644 --- a/internal/addrs/module_instance_test.go +++ b/internal/addrs/module_instance_test.go @@ -164,47 +164,6 @@ func TestModuleInstance_IsDeclaredByCall(t *testing.T) { } } -func TestModuleInstance_ContainingModule(t *testing.T) { - tcs := map[string]struct { - module string - expected string - }{ - "no_instances": { - module: "module.parent.module.child", - expected: "module.parent.module.child", - }, - "last_instance": { - module: "module.parent.module.child[0]", - expected: "module.parent.module.child", - }, - "middle_instance": { - module: "module.parent[0].module.child", - expected: "module.parent[0].module.child", - }, - "all_instances": { - module: "module.parent[0].module.child[0]", - expected: "module.parent[0].module.child", - }, - "single_no_instance": { - module: "module.parent", - expected: "module.parent", - }, - "single_instance": { - module: "module.parent[0]", - expected: "module.parent", - }, - } - for name, tc := range tcs { - t.Run(name, func(t 
*testing.T) { - module := mustParseModuleInstanceStr(tc.module) - actual, expected := module.ContainingModule().String(), tc.expected - if actual != expected { - t.Errorf("expected: %s\nactual: %s", expected, actual) - } - }) - } -} - func mustParseModuleInstanceStr(str string) ModuleInstance { mi, diags := ParseModuleInstanceStr(str) if diags.HasErrors() { diff --git a/internal/addrs/targetable.go b/internal/addrs/targetable.go index d179d8129e84..2bcf84b80c1e 100644 --- a/internal/addrs/targetable.go +++ b/internal/addrs/targetable.go @@ -6,8 +6,6 @@ package addrs // Targetable is an interface implemented by all address types that can be // used as "targets" for selecting sub-graphs of a graph. type Targetable interface { - UniqueKeyer - targetableSigil() // TargetContains returns true if the receiver is considered to contain diff --git a/internal/backend/local/backend_local.go b/internal/backend/local/backend_local.go index eb598f4c3258..d2bf1277ae56 100644 --- a/internal/backend/local/backend_local.go +++ b/internal/backend/local/backend_local.go @@ -10,7 +10,6 @@ import ( "sort" "strings" - "github.com/hashicorp/hcl/v2" "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/internal/backend" @@ -506,25 +505,3 @@ func (v unparsedUnknownVariableValue) ParseVariableValue(mode configs.VariablePa SourceType: terraform.ValueFromInput, }, nil } - -type unparsedTestVariableValue struct { - Expr hcl.Expression -} - -var _ backend.UnparsedVariableValue = unparsedTestVariableValue{} - -func (v unparsedTestVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - value, valueDiags := v.Expr.Value(nil) - diags = diags.Append(valueDiags) - if valueDiags.HasErrors() { - return nil, diags - } - - return &terraform.InputValue{ - Value: value, - SourceType: terraform.ValueFromConfig, - SourceRange: tfdiags.SourceRangeFromHCL(v.Expr.Range()), - }, diags -} diff --git 
a/internal/backend/local/test.go b/internal/backend/local/test.go index afe65ebcc578..33c2af6f5b32 100644 --- a/internal/backend/local/test.go +++ b/internal/backend/local/test.go @@ -19,11 +19,10 @@ import ( "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/lang" "github.com/hashicorp/terraform/internal/lang/marks" "github.com/hashicorp/terraform/internal/logging" "github.com/hashicorp/terraform/internal/moduletest" - configtest "github.com/hashicorp/terraform/internal/moduletest/config" - hcltest "github.com/hashicorp/terraform/internal/moduletest/hcl" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/terraform" @@ -63,13 +62,6 @@ type TestSuiteRunner struct { // Verbose tells the runner to print out plan files during each test run. Verbose bool - - // configProviders is a cache of config keys mapped to all the providers - // referenced by the given config. - // - // The config keys are globally unique across an entire test suite, so we - // store this at the suite runner level to get maximum efficiency. - configProviders map[string]map[string]bool } func (runner *TestSuiteRunner) Stop() { @@ -83,9 +75,6 @@ func (runner *TestSuiteRunner) Cancel() { func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - // First thing, initialise the config providers map. 
- runner.configProviders = make(map[string]map[string]bool) - suite, suiteDiags := runner.collectTests() diags = diags.Append(suiteDiags) if suiteDiags.HasErrors() { @@ -108,14 +97,6 @@ func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { file := suite.Files[name] - priorStates := make(map[string]*terraform.TestContext) - for _, run := range file.Runs { - // Pre-initialise the prior states, so we can easily tell between - // a run block that doesn't exist and a run block that hasn't been - // executed yet. - priorStates[run.Name] = nil - } - fileRunner := &TestFileRunner{ Suite: runner, RelevantStates: map[string]*TestFileState{ @@ -124,7 +105,7 @@ func (runner *TestSuiteRunner) Test() (moduletest.Status, tfdiags.Diagnostics) { State: states.NewState(), }, }, - PriorStates: priorStates, + PriorStates: make(map[string]*terraform.TestContext), } runner.View.File(file, moduletest.Starting) @@ -234,8 +215,6 @@ type TestFileRunner struct { // validate the test assertions, and used when calculating values for // variables within run blocks. PriorStates map[string]*terraform.TestContext - - globalVariables map[string]backend.UnparsedVariableValue } // TestFileState is a helper struct that just maps a run block to the state that @@ -248,9 +227,6 @@ type TestFileState struct { func (runner *TestFileRunner) Test(file *moduletest.File) { log.Printf("[TRACE] TestFileRunner: executing test file %s", file.Name) - // First thing, initialise the global variables for the file - runner.initVariables(file) - // We'll execute the tests in the file. First, mark the overall status as // being skipped. This will ensure that if we've cancelled and the files not // going to do anything it'll be marked as skipped. 
@@ -360,13 +336,7 @@ func (runner *TestFileRunner) run(run *moduletest.Run, file *moduletest.File, st return state, false } - key := MainStateIdentifier - if run.Config.ConfigUnderTest != nil { - key = run.Config.Module.Source.String() - } - runner.gatherProviders(key, config) - - resetConfig, configDiags := configtest.TransformConfigForTest(config, run, file, runner.globalVariables, runner.PriorStates, runner.Suite.configProviders[key]) + resetConfig, configDiags := config.TransformForTest(run.Config, file.Config) defer resetConfig() run.Diagnostics = run.Diagnostics.Append(configDiags) @@ -389,7 +359,7 @@ func (runner *TestFileRunner) run(run *moduletest.Run, file *moduletest.File, st return state, false } - variables, variableDiags := runner.GetVariables(config, run, references) + variables, variableDiags := runner.GetVariables(config, run, file, references) run.Diagnostics = run.Diagnostics.Append(variableDiags) if variableDiags.HasErrors() { run.Status = moduletest.Error @@ -414,6 +384,12 @@ func (runner *TestFileRunner) run(run *moduletest.Run, file *moduletest.File, st resetVariables := runner.AddVariablesToConfig(config, variables) defer resetVariables() + run.Diagnostics = run.Diagnostics.Append(variableDiags) + if variableDiags.HasErrors() { + run.Status = moduletest.Error + return state, false + } + if runner.Suite.Verbose { schemas, diags := planCtx.Schemas(config, plan.PlannedState) @@ -585,7 +561,7 @@ func (runner *TestFileRunner) destroy(config *configs.Config, state *states.Stat var diags tfdiags.Diagnostics - variables, variableDiags := runner.GetVariables(config, run, nil) + variables, variableDiags := runner.GetVariables(config, run, file, nil) diags = diags.Append(variableDiags) if diags.HasErrors() { @@ -873,7 +849,7 @@ func (runner *TestFileRunner) cleanup(file *moduletest.File) { diags = diags.Append(tfdiags.Sourceless(tfdiags.Error, "Inconsistent state", fmt.Sprintf("Found inconsistent state while cleaning up %s. 
This is a bug in Terraform - please report it", file.Name))) } } else { - reset, configDiags := configtest.TransformConfigForTest(runner.Suite.Config, main.Run, file, runner.globalVariables, runner.PriorStates, runner.Suite.configProviders[MainStateIdentifier]) + reset, configDiags := runner.Suite.Config.TransformForTest(main.Run.Config, file.Config) diags = diags.Append(configDiags) if !configDiags.HasErrors() { @@ -948,7 +924,7 @@ func (runner *TestFileRunner) cleanup(file *moduletest.File) { var diags tfdiags.Diagnostics - reset, configDiags := configtest.TransformConfigForTest(state.Run.Config.ConfigUnderTest, state.Run, file, runner.globalVariables, runner.PriorStates, runner.Suite.configProviders[state.Run.Config.Module.Source.String()]) + reset, configDiags := state.Run.Config.ConfigUnderTest.TransformForTest(state.Run.Config, file.Config) diags = diags.Append(configDiags) updated := state.State @@ -979,7 +955,7 @@ func (runner *TestFileRunner) cleanup(file *moduletest.File) { // more variables than are required by the config. FilterVariablesToConfig // should be called before trying to use these variables within a Terraform // plan, apply, or destroy operation. -func (runner *TestFileRunner) GetVariables(config *configs.Config, run *moduletest.Run, references []*addrs.Reference) (terraform.InputValues, tfdiags.Diagnostics) { +func (runner *TestFileRunner) GetVariables(config *configs.Config, run *moduletest.Run, file *moduletest.File, references []*addrs.Reference) (terraform.InputValues, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics // relevantVariables contains the variables that are of interest to this @@ -1023,11 +999,11 @@ func (runner *TestFileRunner) GetVariables(config *configs.Config, run *modulete // - Global variables, from the CLI / env vars / .tfvars files. // - File variables, defined within the `variables` block in the file. // - Run variables, defined within the `variables` block in this run. 
- // - ConfigVariables variables, defined directly within the config. + // - Config variables, defined directly within the config. values := make(terraform.InputValues) // First, let's look at the global variables. - for name, value := range runner.globalVariables { + for name, value := range runner.Suite.GlobalVariables { if !relevantVariables[name] { // Then this run block doesn't need this value. continue @@ -1048,26 +1024,28 @@ func (runner *TestFileRunner) GetVariables(config *configs.Config, run *modulete diags = diags.Append(valueDiags) } - // Second, we'll check the run level variables. + // Second, we'll check the file level variables. + for name, expr := range file.Config.Variables { + if !relevantVariables[name] { + continue + } - // This is a bit more complicated, as the run level variables can reference - // previously defined variables. + value, valueDiags := expr.Value(nil) + diags = diags.Append(valueDiags) - // Preload the available expressions, we're going to validate them when we - // build the context. - var exprs []hcl.Expression - for _, expr := range run.Config.Variables { - exprs = append(exprs, expr) + values[name] = &terraform.InputValue{ + Value: value, + SourceType: terraform.ValueFromConfig, + SourceRange: tfdiags.SourceRangeFromHCL(expr.Range()), + } } - // Preformat the variables we've processed already - these will be made - // available to the eval context. - variables := make(map[string]cty.Value) - for name, value := range values { - variables[name] = value.Value - } + // Third, we'll check the run level variables. - ctx, ctxDiags := hcltest.EvalContext(hcltest.TargetRunBlock, exprs, variables, runner.PriorStates) + // This is a bit more complicated, as the run level variables can reference + // previously defined variables. 
+ + ctx, ctxDiags := runner.ctx(run, file, values) diags = diags.Append(ctxDiags) var failedContext bool @@ -1213,6 +1191,8 @@ func (runner *TestFileRunner) AddVariablesToConfig(config *configs.Config, varia currentVars[name] = variable } + // Next, let's go through our entire inputs and add any that aren't already + // defined into the config. for name, value := range variables { if _, exists := config.Module.Variables[name]; exists { continue @@ -1233,63 +1213,191 @@ func (runner *TestFileRunner) AddVariablesToConfig(config *configs.Config, varia } } -// initVariables initialises the globalVariables within the test runner by -// merging the global variables from the test suite into the variables from -// the file. -func (runner *TestFileRunner) initVariables(file *moduletest.File) { - runner.globalVariables = make(map[string]backend.UnparsedVariableValue) - for name, value := range runner.Suite.GlobalVariables { - runner.globalVariables[name] = value - } - for name, expr := range file.Config.Variables { - runner.globalVariables[name] = unparsedTestVariableValue{expr} - } -} +// EvalCtx returns an hcl.EvalContext that allows the variables blocks within +// run blocks to evaluate references to the outputs from other run blocks. +func (runner *TestFileRunner) ctx(run *moduletest.Run, file *moduletest.File, availableVariables terraform.InputValues) (*hcl.EvalContext, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics -func (runner *TestFileRunner) gatherProviders(key string, config *configs.Config) { - if _, exists := runner.Suite.configProviders[key]; exists { - // Then we've processed this key before, so skip it. - return - } + // First, let's build the set of available run blocks. 
-	providers := make(map[string]bool)
+	availableRunBlocks := make(map[string]*terraform.TestContext)
+	runs := make(map[string]cty.Value)
+	for _, run := range file.Runs {
+		name := run.Name
+
+		attrs := make(map[string]cty.Value)
+		if ctx, exists := runner.PriorStates[name]; exists {
+			// We have executed this run block previously, therefore it is
+			// available as a reference at this point in time.
+			availableRunBlocks[name] = ctx
+
+			for name, config := range ctx.Config.Module.Outputs {
+				output := ctx.State.OutputValue(addrs.AbsOutputValue{
+					OutputValue: addrs.OutputValue{
+						Name: name,
+					},
+					Module: addrs.RootModuleInstance,
+				})
-
-	// First, let's look at the required providers first.
-	for _, provider := range config.Module.ProviderRequirements.RequiredProviders {
-		providers[provider.Name] = true
-		for _, alias := range provider.Aliases {
-			providers[alias.StringCompact()] = true
-		}
-	}
+				var value cty.Value
+				switch {
+				case output == nil:
+					// This means the run block returned null for this output.
+					// It is likely this will produce an error later if it is
+					// referenced, but users can actually specify that null
+					// is an acceptable value for an input variable so we won't
+					// actually raise a fuss about this at all.
+					value = cty.NullVal(cty.DynamicPseudoType)
+				case output.Value.IsNull() || output.Value == cty.NilVal:
+					// This means the output value was returned as (known after
+					// apply). If this is referenced it is always an error; we
+					// can't handle this in an appropriate way at all. For now,
+					// we just mark it as unknown and then later we check and
+					// resolve all the references. We'll raise an error at that
+					// point if the user actually attempts to reference a value
+					// that is unknown.
+					value = cty.DynamicVal
+				default:
+					value = output.Value
+				}
-	// Second, we look at the defined provider configs.
- for _, provider := range config.Module.ProviderConfigs { - providers[provider.Addr().StringCompact()] = true - } + if config.Sensitive || (output != nil && output.Sensitive) { + value = value.Mark(marks.Sensitive) + } + + attrs[name] = value + } + + runs[name] = cty.ObjectVal(attrs) - // Third, we look at the resources and data sources. - for _, resource := range config.Module.ManagedResources { - if resource.ProviderConfigRef != nil { - providers[resource.ProviderConfigRef.String()] = true continue } - providers[resource.Provider.Type] = true + + // We haven't executed this run block yet, therefore it is not available + // as a reference at this point in time. + availableRunBlocks[name] = nil } - for _, datasource := range config.Module.DataResources { - if datasource.ProviderConfigRef != nil { - providers[datasource.ProviderConfigRef.String()] = true + + // Second, let's build the set of available variables. + + vars := make(map[string]cty.Value) + for name, variable := range availableVariables { + vars[name] = variable.Value + } + + // Third, let's do some basic validation over the references. + + for _, value := range run.Config.Variables { + refs, refDiags := lang.ReferencesInExpr(addrs.ParseRefFromTestingScope, value) + diags = diags.Append(refDiags) + if refDiags.HasErrors() { continue } - providers[datasource.Provider.Type] = true - } - // Finally, we look at any module calls to see if any providers are used - // in there. - for _, module := range config.Module.ModuleCalls { - for _, provider := range module.Providers { - providers[provider.InParent.String()] = true + for _, ref := range refs { + if addr, ok := ref.Subject.(addrs.Run); ok { + ctx, exists := availableRunBlocks[addr.Name] + + if !exists { + // Then this is a made up run block. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Reference to unknown run block", + Detail: fmt.Sprintf("The run block %q does not exist within this test file. 
You can only reference run blocks that are in the same test file and will execute before the current run block.", addr.Name), + Subject: ref.SourceRange.ToHCL().Ptr(), + }) + + continue + } + + if ctx == nil { + // This run block exists, but it is after the current run block. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Reference to unavailable run block", + Detail: fmt.Sprintf("The run block %q is not available to the current run block. You can only reference run blocks that are in the same test file and will execute before the current run block.", addr.Name), + Subject: ref.SourceRange.ToHCL().Ptr(), + }) + + continue + } + + value, valueDiags := ref.Remaining.TraverseRel(runs[addr.Name]) + diags = diags.Append(valueDiags) + if valueDiags.HasErrors() { + // This means the reference was invalid somehow, we've + // already added the errors to our diagnostics though so + // we'll just carry on. + continue + } + + if !value.IsWhollyKnown() { + // This is not valid, we cannot allow users to pass unknown + // values into run blocks. There's just going to be + // difficult and confusing errors later if this happens. + + if ctx.Run.Config.Command == configs.PlanTestCommand { + // Then the user has likely attempted to use an output + // that is (known after apply) due to the referenced + // run block only being a plan command. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Reference to unknown value", + Detail: fmt.Sprintf("The value for %s is unknown. Run block %q is executing a \"plan\" operation, and the specified output value is only known after apply.", ref.DisplayString(), addr.Name), + Subject: ref.SourceRange.ToHCL().Ptr(), + }) + + continue + } + + // Otherwise, this is a bug in Terraform. We shouldn't be + // producing (known after apply) values during apply + // operations. 
+ diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Reference to unknown value", + Detail: fmt.Sprintf("The value for %s is unknown; This is a bug in Terraform, please report it.", ref.DisplayString()), + Subject: ref.SourceRange.ToHCL().Ptr(), + }) + } + + continue + } + + if addr, ok := ref.Subject.(addrs.InputVariable); ok { + if _, exists := vars[addr.Name]; !exists { + // This variable reference doesn't exist. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Reference to unavailable variable", + Detail: fmt.Sprintf("The input variable %q is not available to the current run block. You can only reference variables defined at the file or global levels when populating the variables block within a run block.", addr.Name), + Subject: ref.SourceRange.ToHCL().Ptr(), + }) + + continue + } + + // Otherwise, we're good. This is an acceptable reference. + continue + } + + // You can only reference run blocks and variables from the run + // block variables. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid reference", + Detail: "You can only reference earlier run blocks, file level, and global variables while defining variables from inside a run block.", + Subject: ref.SourceRange.ToHCL().Ptr(), + }) } } - runner.Suite.configProviders[key] = providers + // Finally, we can just populate our hcl.EvalContext. 
+ + return &hcl.EvalContext{ + Variables: map[string]cty.Value{ + "run": cty.ObjectVal(runs), + "var": cty.ObjectVal(vars), + }, + }, diags } diff --git a/internal/cloudplugin/binary.go b/internal/cloudplugin/binary.go index 7818d91b4b43..6bf5bbb4582b 100644 --- a/internal/cloudplugin/binary.go +++ b/internal/cloudplugin/binary.go @@ -101,10 +101,10 @@ func (v BinaryManager) Resolve() (*Binary, error) { } // Check if there's a cached binary - if cachedBinary := v.cachedVersion(manifest.Version); cachedBinary != nil { + if cachedBinary := v.cachedVersion(manifest.ProductVersion); cachedBinary != nil { return &Binary{ Path: *cachedBinary, - ProductVersion: manifest.Version, + ProductVersion: manifest.ProductVersion, ResolvedFromCache: true, }, nil } @@ -125,7 +125,7 @@ func (v BinaryManager) Resolve() (*Binary, error) { // Authenticate the archive err = v.verifyCloudPlugin(manifest, buildInfo, t.Name()) if err != nil { - return nil, fmt.Errorf("could not resolve cloudplugin version %q: %w", manifest.Version, err) + return nil, fmt.Errorf("could not resolve cloudplugin version %q: %w", manifest.ProductVersion, err) } // Unarchive @@ -141,14 +141,14 @@ func (v BinaryManager) Resolve() (*Binary, error) { return nil, fmt.Errorf("failed to decompress cloud plugin: %w", err) } - err = os.WriteFile(path.Join(targetPath, ".version"), []byte(manifest.Version), 0644) + err = os.WriteFile(path.Join(targetPath, ".version"), []byte(manifest.ProductVersion), 0644) if err != nil { log.Printf("[ERROR] failed to write .version file to %q: %s", targetPath, err) } return &Binary{ Path: path.Join(targetPath, v.binaryName), - ProductVersion: manifest.Version, + ProductVersion: manifest.ProductVersion, ResolvedFromCache: false, }, nil } @@ -165,33 +165,32 @@ func (v BinaryManager) downloadFileBuffer(pathOrURL string) ([]byte, error) { } // verifyCloudPlugin authenticates the downloaded release archive -func (v BinaryManager) verifyCloudPlugin(archiveManifest *Release, info *BuildArtifact, 
archiveLocation string) error { - signature, err := v.downloadFileBuffer(archiveManifest.URLSHASumsSignatures[0]) +func (v BinaryManager) verifyCloudPlugin(archiveManifest *Manifest, info *ManifestReleaseBuild, archiveLocation string) error { + signature, err := v.downloadFileBuffer(archiveManifest.SHA256SumsSignatureURL) if err != nil { return fmt.Errorf("failed to download cloudplugin SHA256SUMS signature file: %w", err) } - sums, err := v.downloadFileBuffer(archiveManifest.URLSHASums) + sums, err := v.downloadFileBuffer(archiveManifest.SHA256SumsURL) if err != nil { return fmt.Errorf("failed to download cloudplugin SHA256SUMS file: %w", err) } + reportedSHA, err := releaseauth.SHA256FromHex(info.SHA256Sum) + if err != nil { + return fmt.Errorf("the reported checksum %q is not valid: %w", info.SHA256Sum, err) + } checksums, err := releaseauth.ParseChecksums(sums) if err != nil { return fmt.Errorf("failed to parse cloudplugin SHA256SUMS file: %w", err) } - filename := path.Base(info.URL) - reportedSHA, ok := checksums[filename] - if !ok { - return fmt.Errorf("could not find checksum for file %q", filename) - } - sigAuth := releaseauth.NewSignatureAuthentication(signature, sums) if len(v.signingKey) > 0 { sigAuth.PublicKey = v.signingKey } all := releaseauth.AllAuthenticators( + releaseauth.NewMatchingChecksumsAuthentication(reportedSHA, path.Base(info.URL), checksums), releaseauth.NewChecksumAuthentication(reportedSHA, archiveLocation), sigAuth, ) @@ -199,22 +198,25 @@ func (v BinaryManager) verifyCloudPlugin(archiveManifest *Release, info *BuildAr return all.Authenticate() } -func (v BinaryManager) latestManifest(ctx context.Context) (*Release, error) { +func (v BinaryManager) latestManifest(ctx context.Context) (*Manifest, error) { manifestCacheLocation := path.Join(v.cloudPluginDataDir, v.host.String(), "manifest.json") // Find the manifest cache for the hostname. 
- data, err := os.ReadFile(manifestCacheLocation) + info, err := os.Stat(manifestCacheLocation) modTime := time.Time{} - var localManifest *Release + var localManifest *Manifest if err != nil { log.Printf("[TRACE] no cloudplugin manifest cache found for host %q", v.host) } else { log.Printf("[TRACE] cloudplugin manifest cache found for host %q", v.host) - - localManifest, err = decodeManifest(bytes.NewBuffer(data)) - modTime = localManifest.TimestampUpdated - if err != nil { - log.Printf("[WARN] failed to decode cloudplugin manifest cache %q: %s", manifestCacheLocation, err) + modTime = info.ModTime() + + data, err := os.ReadFile(manifestCacheLocation) + if err == nil { + localManifest, err = decodeManifest(bytes.NewBuffer(data)) + if err != nil { + log.Printf("[WARN] failed to decode cloudplugin manifest cache %q: %s", manifestCacheLocation, err) + } } } diff --git a/internal/cloudplugin/client.go b/internal/cloudplugin/client.go index eaab4b147b91..54f7e0480961 100644 --- a/internal/cloudplugin/client.go +++ b/internal/cloudplugin/client.go @@ -18,123 +18,26 @@ import ( "github.com/hashicorp/go-retryablehttp" "github.com/hashicorp/terraform/internal/httpclient" "github.com/hashicorp/terraform/internal/logging" - "github.com/hashicorp/terraform/internal/releaseauth" ) var ( defaultRequestTimeout = 60 * time.Second ) -// SHASumsSignatures holds a list of URLs, each referring a detached signature of the release's build artifacts. -type SHASumsSignatures []string - -// BuildArtifact represents a single build artifact in a release response. 
-type BuildArtifact struct {
-
-	// The hardware architecture of the build artifact
-	// Enum: [386 all amd64 amd64-lxc arm arm5 arm6 arm64 arm7 armelv5 armhfv6 i686 mips mips64 mipsle ppc64le s390x ui x86_64]
-	Arch string `json:"arch"`
-
-	// The Operating System corresponding to the build artifact
-	// Enum: [archlinux centos darwin debian dragonfly freebsd linux netbsd openbsd plan9 python solaris terraform web windows]
-	Os string `json:"os"`
-
-	// This build is unsupported and provided for convenience only.
-	Unsupported bool `json:"unsupported,omitempty"`
-
-	// The URL where this build can be downloaded
-	URL string `json:"url"`
-}
-
-// ReleaseStatus Status of the product release
-// Example: {"message":"This release is supported","state":"supported"}
-type ReleaseStatus struct {
-
-	// Provides information about the most recent change; must be provided when Name="withdrawn"
-	Message string `json:"message,omitempty"`
-
-	// The state name of the release
-	// Enum: [supported unsupported withdrawn]
-	State string `json:"state"`
-
-	// The timestamp for the creation of the product release status
-	// Example: 2009-11-10T23:00:00Z
-	// Format: date-time
-	TimestampUpdated time.Time `json:"timestamp_updated"`
+// ManifestReleaseBuild is the json-encoded details about a particular
+// build of terraform-cloudplugin
+type ManifestReleaseBuild struct {
+	URL       string `json:"url"`
+	SHA256Sum string `json:"sha256sum"`
 }
-
-// Release All metadata for a single product release
-type Release struct {
-	// builds
-	Builds []*BuildArtifact `json:"builds,omitempty"`
-
-	// A docker image name and tag for this release in the format `name`:`tag`
-	// Example: consul:1.10.0-beta3
-	DockerNameTag string `json:"docker_name_tag,omitempty"`
-
-	// True if and only if this product release is a prerelease.
-	IsPrerelease bool `json:"is_prerelease"`
-
-	// The license class indicates how this product is licensed.
- // Enum: [enterprise hcp oss] - LicenseClass string `json:"license_class"` - - // The product name - // Example: consul-enterprise - // Required: true - Name string `json:"name"` - - // Status - Status ReleaseStatus `json:"status"` - - // Timestamp at which this product release was created. - // Example: 2009-11-10T23:00:00Z - // Format: date-time - TimestampCreated time.Time `json:"timestamp_created"` - - // Timestamp when this product release was most recently updated. - // Example: 2009-11-10T23:00:00Z - // Format: date-time - TimestampUpdated time.Time `json:"timestamp_updated"` - - // URL for a blogpost announcing this release - URLBlogpost string `json:"url_blogpost,omitempty"` - - // URL for the changelog covering this release - URLChangelog string `json:"url_changelog,omitempty"` - - // The project's docker repo on Amazon ECR-Public - URLDockerRegistryDockerhub string `json:"url_docker_registry_dockerhub,omitempty"` - - // The project's docker repo on DockerHub - URLDockerRegistryEcr string `json:"url_docker_registry_ecr,omitempty"` - - // URL for the software license applicable to this release - // Required: true - URLLicense string `json:"url_license,omitempty"` - - // The project's website URL - URLProjectWebsite string `json:"url_project_website,omitempty"` - - // URL for this release's change notes - URLReleaseNotes string `json:"url_release_notes,omitempty"` - - // URL for this release's file containing checksums of all the included build artifacts - URLSHASums string `json:"url_shasums"` - - // An array of URLs, each pointing to a signature file. Each signature file is a detached signature - // of the checksums file (see field `url_shasums`). Signature files may or may not embed the signing - // key ID in the filename. - URLSHASumsSignatures SHASumsSignatures `json:"url_shasums_signatures"` - - // URL for the product's source code repository. This field is empty for - // enterprise and hcp products. 
- URLSourceRepository string `json:"url_source_repository,omitempty"` - - // The version of this release - // Example: 1.10.0-beta3 - // Required: true - Version string `json:"version"` +// Manifest is the json-encoded manifest details sent by Terraform Cloud +type Manifest struct { + ProductVersion string `json:"plugin_version"` + Archives map[string]ManifestReleaseBuild `json:"archives"` + SHA256SumsURL string `json:"sha256sums_url"` + SHA256SumsSignatureURL string `json:"sha256sums_signature_url"` + lastModified time.Time } // CloudPluginClient fetches and verifies release distributions of the cloudplugin @@ -151,8 +54,8 @@ func requestLogHook(logger retryablehttp.Logger, req *http.Request, i int) { } } -func decodeManifest(data io.Reader) (*Release, error) { - var man Release +func decodeManifest(data io.Reader) (*Manifest, error) { + var man Manifest dec := json.NewDecoder(data) if err := dec.Decode(&man); err != nil { return nil, ErrQueryFailed{ @@ -185,8 +88,8 @@ func NewCloudPluginClient(ctx context.Context, serviceURL *url.URL) (*CloudPlugi // FetchManifest retrieves the cloudplugin manifest from Terraform Cloud, // but returns a nil manifest if a 304 response is received, depending // on the lastModified time. 
-func (c CloudPluginClient) FetchManifest(lastModified time.Time) (*Release, error) { - req, _ := retryablehttp.NewRequestWithContext(c.ctx, "GET", c.serviceURL.JoinPath("manifest.json").String(), nil) +func (c CloudPluginClient) FetchManifest(lastModified time.Time) (*Manifest, error) { + req, _ := retryablehttp.NewRequestWithContext(c.ctx, "GET", c.serviceURL.JoinPath("manifest").String(), nil) req.Header.Set("If-Modified-Since", lastModified.Format(http.TimeFormat)) resp, err := c.httpClient.Do(req) @@ -202,10 +105,15 @@ func (c CloudPluginClient) FetchManifest(lastModified time.Time) (*Release, erro switch resp.StatusCode { case http.StatusOK: + lastModifiedRaw := resp.Header.Get("Last-Modified") + if len(lastModifiedRaw) > 0 { + lastModified, _ = time.Parse(http.TimeFormat, lastModifiedRaw) + } manifest, err := decodeManifest(resp.Body) if err != nil { return nil, err } + manifest.lastModified = lastModified return manifest, nil case http.StatusNotModified: return nil, nil @@ -272,52 +180,19 @@ func (c CloudPluginClient) resolveManifestURL(pathOrURL string) (*url.URL, error } // Select gets the specific build data from the Manifest for the specified OS/Architecture -func (m Release) Select(goos, arch string) (*BuildArtifact, error) { +func (m Manifest) Select(goos, arch string) (*ManifestReleaseBuild, error) { var supported []string - var found *BuildArtifact - for _, build := range m.Builds { - key := fmt.Sprintf("%s_%s", build.Os, build.Arch) + for key := range m.Archives { supported = append(supported, key) - - if goos == build.Os && arch == build.Arch { - found = build - } } osArchKey := fmt.Sprintf("%s_%s", goos, arch) log.Printf("[TRACE] checking for cloudplugin archive for %s. 
Supported architectures: %v", osArchKey, supported) - if found == nil { + archiveOSArch, ok := m.Archives[osArchKey] + if !ok { return nil, ErrArchNotSupported } - return found, nil -} - -// PrimarySHASumsSignatureURL returns the URL among the URLSHASumsSignatures that matches -// the public key known by this version of terraform. It falls back to the first URL with no -// ID in the URL. -func (m Release) PrimarySHASumsSignatureURL() (string, error) { - if len(m.URLSHASumsSignatures) == 0 { - return "", fmt.Errorf("no SHA256SUMS URLs were available") - } - - findBySuffix := func(suffix string) string { - for _, url := range m.URLSHASumsSignatures { - if len(url) > len(suffix) && strings.EqualFold(suffix, url[len(url)-len(suffix):]) { - return url - } - } - return "" - } - - withKeyID := findBySuffix(fmt.Sprintf(".%s.sig", releaseauth.HashiCorpPublicKeyID)) - if withKeyID == "" { - withNoKeyID := findBySuffix("_SHA256SUMS.sig") - if withNoKeyID == "" { - return "", fmt.Errorf("no SHA256SUMS URLs matched the known public key") - } - return withNoKeyID, nil - } - return withKeyID, nil + return &archiveOSArch, nil } diff --git a/internal/cloudplugin/client_test.go b/internal/cloudplugin/client_test.go index f8bfad5b12ae..8e73afe622b5 100644 --- a/internal/cloudplugin/client_test.go +++ b/internal/cloudplugin/client_test.go @@ -41,7 +41,7 @@ func TestCloudPluginClient_DownloadFile(t *testing.T) { t.Run("200 response", func(t *testing.T) { buffer := bytes.Buffer{} - err := client.DownloadFile("/archives/terraform-cloudplugin_0.1.0_SHA256SUMS", &buffer) + err := client.DownloadFile("/archives/terraform-cloudplugin/terraform-cloudplugin_0.1.0_SHA256SUMS", &buffer) if err != nil { t.Fatal("expected no error") } @@ -83,8 +83,12 @@ func TestCloudPluginClient_FetchManifest(t *testing.T) { t.Fatal("expected manifest") } - if expected := "0.1.0"; manifest.Version != expected { - t.Errorf("expected ProductVersion %q, got %q", expected, manifest.Version) + if manifest.lastModified 
!= testManifestLastModified {
+			t.Errorf("expected lastModified %q, got %q", testManifestLastModified, manifest.lastModified)
+		}
+
+		if expected := "0.1.0"; manifest.ProductVersion != expected {
+			t.Errorf("expected ProductVersion %q, got %q", expected, manifest.ProductVersion)
 		}
 	})
@@ -120,60 +124,3 @@ func TestCloudPluginClient_NotSupportedByTerraformCloud(t *testing.T) {
 		t.Errorf("Expected ErrCloudPluginNotSupported, got %v", err)
 	}
 }
-
-func TestRelease_PrimarySHASumsSignatureURL(t *testing.T) {
-	example := Release{
-		URLSHASumsSignatures: []string{
-			"https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS.sig",
-			"https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS/72D7468F.sig", // Not quite right
-			"https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS.72D7468F.sig",
-		},
-	}
-
-	url, err := example.PrimarySHASumsSignatureURL()
-	if err != nil {
-		t.Fatalf("Expected no error, got %s", err)
-	}
-
-	if url != example.URLSHASumsSignatures[2] {
-		t.Errorf("Expected URL %q, but got %q", example.URLSHASumsSignatures[2], url)
-	}
-}
-
-func TestRelease_PrimarySHASumsSignatureURL_lowercase_should_match(t *testing.T) {
-	example := Release{
-		URLSHASumsSignatures: []string{
-			"https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS.sig",
-			"https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS.72d7468f.sig",
-		},
-	}
-
-	url, err := example.PrimarySHASumsSignatureURL()
-	if err != nil {
-		t.Fatalf("Expected no error, got %s", err)
-	}
-
-	// Not expected but technically fine since these are hex values
-	if url != example.URLSHASumsSignatures[1] {
-		t.Errorf("Expected URL %q, but got %q", example.URLSHASumsSignatures[1], url)
-	}
-}
-
-func
TestRelease_PrimarySHASumsSignatureURL_no_known_keys(t *testing.T) { - example := Release{ - URLSHASumsSignatures: []string{ - "https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS.sig", - "https://releases.hashicorp.com/terraform-cloudplugin/0.1.0-prototype/terraform-cloudplugin_0.1.0-prototype_SHA256SUMS.ABCDEF012.sig", - }, - } - - url, err := example.PrimarySHASumsSignatureURL() - if err != nil { - t.Fatalf("Expected no error, got %s", err) - } - - // Returns key with no ID - if url != example.URLSHASumsSignatures[0] { - t.Errorf("Expected URL %q, but got %q", example.URLSHASumsSignatures[0], url) - } -} diff --git a/internal/cloudplugin/testing.go b/internal/cloudplugin/testing.go index 84007a4107e3..cafe68827981 100644 --- a/internal/cloudplugin/testing.go +++ b/internal/cloudplugin/testing.go @@ -9,42 +9,25 @@ import ( "net/http" "net/http/httptest" "os" + "path" "testing" "time" ) var testManifest = `{ - "builds": [ - { - "arch": "amd64", - "os": "darwin", - "url": "/archives/terraform-cloudplugin_0.1.0_darwin_amd64.zip" - } - ], - "is_prerelease": true, - "license_class": "ent", - "name": "terraform-cloudplugin", - "status": { - "state": "supported", - "timestamp_updated": "2023-07-31T15:18:20.243Z" - }, - "timestamp_created": "2023-07-31T15:18:20.243Z", - "timestamp_updated": "2023-07-31T15:18:20.243Z", - "url_changelog": "https://github.com/hashicorp/terraform-cloudplugin/blob/main/CHANGELOG.md", - "url_license": "https://github.com/hashicorp/terraform-cloudplugin/blob/main/LICENSE", - "url_project_website": "https://www.terraform.io/", - "url_shasums": "/archives/terraform-cloudplugin_0.1.0_SHA256SUMS", - "url_shasums_signatures": [ - "/archives/terraform-cloudplugin_0.1.0_SHA256SUMS.sig", - "/archives/terraform-cloudplugin_0.1.0_SHA256SUMS.72D7468F.sig" - ], - "url_source_repository": "https://github.com/hashicorp/terraform-cloudplugin", - "version": "0.1.0" + "plugin_version": "0.1.0", + 
"archives": { + "darwin_amd64": { + "url": "/archives/terraform-cloudplugin/terraform-cloudplugin_0.1.0_darwin_amd64.zip", + "sha256sum": "22db2f0c70b50cff42afd4878fea9f6848a63f1b6532bd8b64b899f574acb35d" + } + }, + "sha256sums_url": "/archives/terraform-cloudplugin/terraform-cloudplugin_0.1.0_SHA256SUMS", + "sha256sums_signature_url": "/archives/terraform-cloudplugin/terraform-cloudplugin_0.1.0_SHA256SUMS.sig" }` var ( - // This is the same as timestamp_updated above - testManifestLastModified, _ = time.Parse(time.RFC3339, "2023-07-31T15:18:20Z") + testManifestLastModified = time.Date(2023, time.August, 1, 0, 0, 0, 0, time.UTC) ) type testHTTPHandler struct { @@ -57,7 +40,7 @@ func (h *testHTTPHandler) Handle(w http.ResponseWriter, r *http.Request) { } switch r.URL.Path { - case "/api/cloudplugin/v1/manifest.json": + case "/api/cloudplugin/v1/manifest": ifModifiedSince, _ := time.Parse(http.TimeFormat, r.Header.Get("If-Modified-Since")) w.Header().Set("Last-Modified", testManifestLastModified.Format(http.TimeFormat)) @@ -67,7 +50,8 @@ func (h *testHTTPHandler) Handle(w http.ResponseWriter, r *http.Request) { w.Write([]byte(testManifest)) } default: - fileToSend, err := os.Open(fmt.Sprintf("testdata/%s", r.URL.Path)) + baseName := path.Base(r.URL.Path) + fileToSend, err := os.Open(fmt.Sprintf("testdata/archives/%s", baseName)) if err == nil { io.Copy(w, fileToSend) return diff --git a/internal/command/import.go b/internal/command/import.go index a5ff57bcbb8f..fc61671f9ec4 100644 --- a/internal/command/import.go +++ b/internal/command/import.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hclsyntax" + "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/backend" @@ -235,8 +236,11 @@ func (c *ImportCommand) Run(args []string) int { newState, importDiags := lr.Core.Import(lr.Config, lr.InputState, &terraform.ImportOpts{ Targets: []*terraform.ImportTarget{ { - 
LegacyAddr: addr, - IDString: args[1], + Addr: addr, + + // In the import block, the ID can be an arbitrary hcl.Expression, + // but here it's always interpreted as a literal string. + ID: hcl.StaticExpr(cty.StringVal(args[1]), configs.SynthBody("import", nil).MissingItemRange()), }, }, diff --git a/internal/command/test_test.go b/internal/command/test_test.go index ed713e85b926..62a6be70815f 100644 --- a/internal/command/test_test.go +++ b/internal/command/test_test.go @@ -1146,12 +1146,8 @@ is declared in run block "test". run "finalise"... skip main.tftest.hcl... tearing down main.tftest.hcl... fail -providers.tftest.hcl... in progress - run "test"... fail -providers.tftest.hcl... tearing down -providers.tftest.hcl... fail -Failure! 1 passed, 2 failed, 1 skipped. +Failure! 1 passed, 1 failed, 1 skipped. ` actualOut := output.Stdout() if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 { @@ -1173,9 +1169,9 @@ Error: Reference to unavailable run block on main.tftest.hcl line 16, in run "test": 16: input_two = run.finalise.response -The run block "finalise" has not executed yet. You can only reference run -blocks that are in the same test file and will execute before the current run -block. +The run block "finalise" is not available to the current run block. You can +only reference run blocks that are in the same test file and will execute +before the current run block. Error: Reference to unknown run block @@ -1185,15 +1181,6 @@ Error: Reference to unknown run block The run block "madeup" does not exist within this test file. You can only reference run blocks that are in the same test file and will execute before the current run block. - -Error: Reference to unavailable variable - - on providers.tftest.hcl line 3, in provider "test": - 3: resource_prefix = var.default - -The input variable "default" is not available to the current run block. 
You -can only reference variables defined at the file or global levels when -populating the variables block within a run block. ` actualErr := output.Stderr() if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 { @@ -1205,98 +1192,6 @@ populating the variables block within a run block. } } -func TestTest_UndefinedVariables(t *testing.T) { - td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "variables_undefined_in_config")), td) - defer testChdir(t, td)() - - provider := testing_command.NewProvider(nil) - view, done := testView(t) - - c := &TestCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - View: view, - }, - } - - code := c.Run([]string{"-no-color"}) - output := done(t) - - if code == 0 { - t.Errorf("expected status code 0 but got %d", code) - } - - expectedOut := `main.tftest.hcl... in progress - run "test"... fail -main.tftest.hcl... tearing down -main.tftest.hcl... fail - -Failure! 0 passed, 1 failed. -` - actualOut := output.Stdout() - if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, actualOut, diff) - } - - expectedErr := ` -Error: Reference to undeclared input variable - - on main.tf line 2, in resource "test_resource" "foo": - 2: value = var.input - -An input variable with the name "input" has not been declared. This variable -can be declared with a variable "input" {} block. 
-` - actualErr := output.Stderr() - if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, actualErr, diff) - } - - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) - } -} - -func TestTest_VariablesInProviders(t *testing.T) { - td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "provider_vars")), td) - defer testChdir(t, td)() - - provider := testing_command.NewProvider(nil) - view, done := testView(t) - - c := &TestCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - View: view, - }, - } - - code := c.Run([]string{"-no-color"}) - output := done(t) - - if code != 0 { - t.Errorf("expected status code 0 but got %d", code) - } - - expected := `main.tftest.hcl... in progress - run "test"... pass -main.tftest.hcl... tearing down -main.tftest.hcl... pass - -Success! 1 passed, 0 failed. 
-` - actual := output.All() - if diff := cmp.Diff(actual, expected); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) - } - - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) - } -} - func TestTest_ExpectedFailuresDuringPlanning(t *testing.T) { td := t.TempDir() testCopyDir(t, testFixturePath(path.Join("test", "expected_failures_during_planning")), td) @@ -1777,156 +1672,3 @@ func TestTest_LongRunningTestJSON(t *testing.T) { t.Errorf("unexpected output\n\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", strings.Join(expected, "\n"), strings.Join(messages, "\n"), diff) } } - -func TestTest_RunBlocksInProviders(t *testing.T) { - td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "provider_runs")), td) - defer testChdir(t, td)() - - provider := testing_command.NewProvider(nil) - - providerSource, close := newMockProviderSource(t, map[string][]string{ - "test": {"1.0.0"}, - }) - defer close() - - streams, done := terminal.StreamsForTesting(t) - view := views.NewView(streams) - ui := new(cli.MockUi) - - meta := Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - Ui: ui, - View: view, - Streams: streams, - ProviderSource: providerSource, - } - - init := &InitCommand{ - Meta: meta, - } - - if code := init.Run(nil); code != 0 { - t.Fatalf("expected status code 0 but got %d: %s", code, ui.ErrorWriter) - } - - test := &TestCommand{ - Meta: meta, - } - - code := test.Run([]string{"-no-color"}) - output := done(t) - - if code != 0 { - t.Errorf("expected status code 0 but got %d", code) - } - - expected := `main.tftest.hcl... in progress - run "setup"... pass - run "main"... pass -main.tftest.hcl... tearing down -main.tftest.hcl... pass - -Success! 2 passed, 0 failed. 
-` - actual := output.All() - if diff := cmp.Diff(actual, expected); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expected, actual, diff) - } - - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) - } -} - -func TestTest_RunBlocksInProviders_BadReferences(t *testing.T) { - td := t.TempDir() - testCopyDir(t, testFixturePath(path.Join("test", "provider_runs_invalid")), td) - defer testChdir(t, td)() - - provider := testing_command.NewProvider(nil) - - providerSource, close := newMockProviderSource(t, map[string][]string{ - "test": {"1.0.0"}, - }) - defer close() - - streams, done := terminal.StreamsForTesting(t) - view := views.NewView(streams) - ui := new(cli.MockUi) - - meta := Meta{ - testingOverrides: metaOverridesForProvider(provider.Provider), - Ui: ui, - View: view, - Streams: streams, - ProviderSource: providerSource, - } - - init := &InitCommand{ - Meta: meta, - } - - if code := init.Run(nil); code != 0 { - t.Fatalf("expected status code 0 but got %d: %s", code, ui.ErrorWriter) - } - - test := &TestCommand{ - Meta: meta, - } - - code := test.Run([]string{"-no-color"}) - output := done(t) - - if code != 1 { - t.Errorf("expected status code 1 but got %d", code) - } - - expectedOut := `missing_run_block.tftest.hcl... in progress - run "main"... fail -missing_run_block.tftest.hcl... tearing down -missing_run_block.tftest.hcl... fail -unavailable_run_block.tftest.hcl... in progress - run "main"... fail -unavailable_run_block.tftest.hcl... tearing down -unavailable_run_block.tftest.hcl... fail -unused_provider.tftest.hcl... in progress - run "main"... pass -unused_provider.tftest.hcl... tearing down -unused_provider.tftest.hcl... pass - -Failure! 1 passed, 2 failed. 
-` - actualOut := output.Stdout() - if diff := cmp.Diff(actualOut, expectedOut); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedOut, actualOut, diff) - } - - expectedErr := ` -Error: Reference to unknown run block - - on missing_run_block.tftest.hcl line 2, in provider "test": - 2: resource_prefix = run.missing.resource_directory - -The run block "missing" does not exist within this test file. You can only -reference run blocks that are in the same test file and will execute before -the provider is required. - -Error: Reference to unavailable run block - - on unavailable_run_block.tftest.hcl line 2, in provider "test": - 2: resource_prefix = run.main.resource_directory - -The run block "main" has not executed yet. You can only reference run blocks -that are in the same test file and will execute before the provider is -required. -` - actualErr := output.Stderr() - if diff := cmp.Diff(actualErr, expectedErr); len(diff) > 0 { - t.Errorf("output didn't match expected:\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", expectedErr, actualErr, diff) - } - - if provider.ResourceCount() > 0 { - t.Errorf("should have deleted all resources on completion but left %v", provider.ResourceString()) - } -} diff --git a/internal/command/testdata/test/bad-references/providers.tftest.hcl b/internal/command/testdata/test/bad-references/providers.tftest.hcl deleted file mode 100644 index ee784c12a491..000000000000 --- a/internal/command/testdata/test/bad-references/providers.tftest.hcl +++ /dev/null @@ -1,6 +0,0 @@ - -provider "test" { - resource_prefix = var.default -} - -run "test" {} diff --git a/internal/command/testdata/test/provider_runs/main.tf b/internal/command/testdata/test/provider_runs/main.tf deleted file mode 100644 index 50a07d62f0a8..000000000000 --- a/internal/command/testdata/test/provider_runs/main.tf +++ /dev/null @@ -1 +0,0 @@ -resource "test_resource" "foo" {} diff --git 
a/internal/command/testdata/test/provider_runs/main.tftest.hcl b/internal/command/testdata/test/provider_runs/main.tftest.hcl deleted file mode 100644 index 2dd8c536635f..000000000000 --- a/internal/command/testdata/test/provider_runs/main.tftest.hcl +++ /dev/null @@ -1,24 +0,0 @@ -variables { - resource_directory = "resources" -} - -provider "test" { - alias = "setup" - resource_prefix = var.resource_directory -} - -run "setup" { - module { - source = "./setup" - } - - providers = { - test = test.setup - } -} - -provider "test" { - resource_prefix = run.setup.resource_directory -} - -run "main" {} diff --git a/internal/command/testdata/test/provider_runs/setup/main.tf b/internal/command/testdata/test/provider_runs/setup/main.tf deleted file mode 100644 index 25d3e57e9633..000000000000 --- a/internal/command/testdata/test/provider_runs/setup/main.tf +++ /dev/null @@ -1,11 +0,0 @@ -variable "resource_directory" { - type = string -} - -resource "test_resource" "foo" { - value = var.resource_directory -} - -output "resource_directory" { - value = test_resource.foo.value -} diff --git a/internal/command/testdata/test/provider_runs_invalid/main.tf b/internal/command/testdata/test/provider_runs_invalid/main.tf deleted file mode 100644 index 25d3e57e9633..000000000000 --- a/internal/command/testdata/test/provider_runs_invalid/main.tf +++ /dev/null @@ -1,11 +0,0 @@ -variable "resource_directory" { - type = string -} - -resource "test_resource" "foo" { - value = var.resource_directory -} - -output "resource_directory" { - value = test_resource.foo.value -} diff --git a/internal/command/testdata/test/provider_runs_invalid/missing_run_block.tftest.hcl b/internal/command/testdata/test/provider_runs_invalid/missing_run_block.tftest.hcl deleted file mode 100644 index bd88563d1739..000000000000 --- a/internal/command/testdata/test/provider_runs_invalid/missing_run_block.tftest.hcl +++ /dev/null @@ -1,9 +0,0 @@ -provider "test" { - resource_prefix = run.missing.resource_directory 
-} - -run "main" { - variables { - resource_directory = "resource" - } -} diff --git a/internal/command/testdata/test/provider_runs_invalid/unavailable_run_block.tftest.hcl b/internal/command/testdata/test/provider_runs_invalid/unavailable_run_block.tftest.hcl deleted file mode 100644 index ec39cd278a5d..000000000000 --- a/internal/command/testdata/test/provider_runs_invalid/unavailable_run_block.tftest.hcl +++ /dev/null @@ -1,9 +0,0 @@ -provider "test" { - resource_prefix = run.main.resource_directory -} - -run "main" { - variables { - resource_directory = "resource" - } -} diff --git a/internal/command/testdata/test/provider_runs_invalid/unused_provider.tftest.hcl b/internal/command/testdata/test/provider_runs_invalid/unused_provider.tftest.hcl deleted file mode 100644 index c0a4dec3bf72..000000000000 --- a/internal/command/testdata/test/provider_runs_invalid/unused_provider.tftest.hcl +++ /dev/null @@ -1,17 +0,0 @@ -provider "test" { - resource_prefix = run.main.resource_directory -} - -provider "test" { - alias = "usable" -} - -run "main" { - providers = { - test = test.usable - } - - variables { - resource_directory = "resource" - } -} diff --git a/internal/command/testdata/test/provider_vars/main.tf b/internal/command/testdata/test/provider_vars/main.tf deleted file mode 100644 index 8968be12da31..000000000000 --- a/internal/command/testdata/test/provider_vars/main.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - test = { - source = "hashicorp/test" - } - } -} - -resource "test_resource" "foo" {} diff --git a/internal/command/testdata/test/provider_vars/main.tftest.hcl b/internal/command/testdata/test/provider_vars/main.tftest.hcl deleted file mode 100644 index 2f06972d60fe..000000000000 --- a/internal/command/testdata/test/provider_vars/main.tftest.hcl +++ /dev/null @@ -1,10 +0,0 @@ - -variables { - resource_directory = "my-resource-dir" -} - -provider "test" { - resource_prefix = var.resource_directory -} - -run "test" {} diff --git 
a/internal/command/testdata/test/variables_undefined_in_config/main.tf b/internal/command/testdata/test/variables_undefined_in_config/main.tf deleted file mode 100644 index 2160d8edec62..000000000000 --- a/internal/command/testdata/test/variables_undefined_in_config/main.tf +++ /dev/null @@ -1,3 +0,0 @@ -resource "test_resource" "foo" { - value = var.input -} diff --git a/internal/command/testdata/test/variables_undefined_in_config/main.tftest.hcl b/internal/command/testdata/test/variables_undefined_in_config/main.tftest.hcl deleted file mode 100644 index 144f85601aba..000000000000 --- a/internal/command/testdata/test/variables_undefined_in_config/main.tftest.hcl +++ /dev/null @@ -1,10 +0,0 @@ -run "test" { - variables { - input = "value" - } - - assert { - condition = test_resource.foo.value == "value" - error_message = "bad value" - } -} diff --git a/internal/command/testdata/validate-invalid/duplicate_import_targets/output.json b/internal/command/testdata/validate-invalid/duplicate_import_targets/output.json index 091b5d82748b..bad183b0c9c3 100644 --- a/internal/command/testdata/validate-invalid/duplicate_import_targets/output.json +++ b/internal/command/testdata/validate-invalid/duplicate_import_targets/output.json @@ -11,22 +11,22 @@ "range": { "filename": "testdata/validate-invalid/duplicate_import_targets/main.tf", "start": { - "line": 10, - "column": 8, - "byte": 101 + "line": 9, + "column": 1, + "byte": 85 }, "end": { - "line": 10, - "column": 24, - "byte": 117 + "line": 9, + "column": 7, + "byte": 91 } }, "snippet": { - "context": "import", - "code": " to = aws_instance.web", - "start_line": 10, - "highlight_start_offset": 7, - "highlight_end_offset": 23, + "context": null, + "code": "import {", + "start_line": 9, + "highlight_start_offset": 0, + "highlight_end_offset": 6, "values": [] } } diff --git a/internal/configs/config.go b/internal/configs/config.go index 7cbb911e7710..63fb34639fc8 100644 --- a/internal/configs/config.go +++ 
b/internal/configs/config.go @@ -429,9 +429,22 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse } reqs[fqn] = nil } + for _, i := range c.Module.Import { + implied, err := addrs.ParseProviderPart(i.To.Resource.Resource.ImpliedProvider()) + if err == nil { + provider := c.Module.ImpliedProviderForUnqualifiedType(implied) + if _, exists := reqs[provider]; exists { + // Explicit dependency already present + continue + } + reqs[provider] = nil + } + // We don't return a diagnostic here, because the invalid address will + // have been caught elsewhere. + } - // Import blocks that are generating config may have a custom provider - // meta-argument. Like the provider meta-argument used in resource blocks, + // Import blocks that are generating config may also have a custom provider + // meta-argument. Like the provider meta-argument used in resource blocks, // we use this opportunity to load any implicit providers. // // We'll also use this to validate that import blocks and targeted resource @@ -439,14 +452,14 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse // this will be because the user has written explicit provider arguments // that don't agree and we'll get them to fix it. for _, i := range c.Module.Import { - if len(i.ToResource.Module) > 0 { + if len(i.To.Module) > 0 { // All provider information for imports into modules should come // from the module block, so we don't need to load anything for // import targets within modules. continue } - if target, exists := c.Module.ManagedResources[i.ToResource.Resource.String()]; exists { + if target, exists := c.Module.ManagedResources[i.To.String()]; exists { // This means the information about the provider for this import // should come from the resource block itself and not the import // block.
@@ -854,3 +867,94 @@ func (c *Config) CheckCoreVersionRequirements() hcl.Diagnostics { return diags } + +// TransformForTest prepares the config to execute the given test. +// +// This function directly edits the config that is to be tested, and returns a +// function that will reset the config back to its original state. +// +// Tests will call this before they execute, and then call the deferred function +// to reset the config before the next test. +func (c *Config) TransformForTest(run *TestRun, file *TestFile) (func(), hcl.Diagnostics) { + var diags hcl.Diagnostics + + // Currently, we only need to override the provider settings. + // + // We can have a set of providers defined within the config, and we can + // also have a set of providers defined within the test file. The run can + // then also specify a set of overrides that tell Terraform exactly which + // providers from the test file to apply into the config. + // + // The process here is as follows: + // 1. Take all the providers in the original config keyed by name.alias, + // we call this `previous`. + // 2. Copy them all into a new map, we call this `next`. + // 3a. If the run has configuration specifying provider overrides, we copy + // only the specified providers from the test file into `next`. While + // doing this we make sure to preserve the name and alias from the + // original config. + // 3b. If the run has no override configuration, we copy all the providers + // from the test file into `next`, overriding all providers with name + // collisions from the original config. + // 4. We then modify the original configuration so that the providers it + // holds are the combination specified by the original config, the test + // file, and the run block. + // 5. We then return a function that resets the original config back to + // its original state. This can be called by the surrounding test once + // completed so future run blocks can safely execute. + + // First, initialise `previous` and `next`.
`previous` contains a backup of + // the providers from the original config. `next` contains the set of + // providers that will be used by the test. `next` starts with the set of + // providers from the original config. + previous := c.Module.ProviderConfigs + next := make(map[string]*Provider) + for key, value := range previous { + next[key] = value + } + + if run != nil && len(run.Providers) > 0 { + // Then we'll only copy over and overwrite the specific providers asked + // for by this run block. + + for _, ref := range run.Providers { + + testProvider, ok := file.Providers[ref.InParent.String()] + if !ok { + // This reference is invalid, as we don't have the + // specified provider in the parent. This should have been + // caught earlier during validation, so it is unlikely to happen. + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: fmt.Sprintf("Missing provider definition for %s", ref.InParent.String()), + Detail: "This provider block references a provider definition that does not exist.", + Subject: ref.InParent.NameRange.Ptr(), + }) + continue + } + + next[ref.InChild.String()] = &Provider{ + Name: ref.InChild.Name, + NameRange: ref.InChild.NameRange, + Alias: ref.InChild.Alias, + AliasRange: ref.InChild.AliasRange, + Version: testProvider.Version, + Config: testProvider.Config, + DeclRange: testProvider.DeclRange, + } + + } + } else { + // Otherwise, let's copy over and overwrite all providers specified by + // the test file itself. + for key, provider := range file.Providers { + next[key] = provider + } + } + + c.Module.ProviderConfigs = next + return func() { + // Reset the original config within the returned function.
+ c.Module.ProviderConfigs = previous + }, diags +} diff --git a/internal/configs/config_test.go b/internal/configs/config_test.go index 52d0da37f781..5cd31110f59d 100644 --- a/internal/configs/config_test.go +++ b/internal/configs/config_test.go @@ -4,12 +4,17 @@ package configs import ( + "bytes" + "fmt" "os" + "strings" "testing" "github.com/go-test/deep" "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/hclparse" "github.com/zclconf/go-cty/cty" version "github.com/hashicorp/go-version" @@ -597,3 +602,228 @@ func TestConfigImportProviderWithNoResourceProvider(t *testing.T) { Use the provider argument in the target resource block to configure the provider for a resource with explicit provider configuration.`, }) } + +func TestTransformForTest(t *testing.T) { + + str := func(providers map[string]string) string { + var buffer bytes.Buffer + for key, config := range providers { + buffer.WriteString(fmt.Sprintf("%s: %s\n", key, config)) + } + return buffer.String() + } + + convertToProviders := func(t *testing.T, contents map[string]string) map[string]*Provider { + t.Helper() + + providers := make(map[string]*Provider) + for key, content := range contents { + parser := hclparse.NewParser() + file, diags := parser.ParseHCL([]byte(content), fmt.Sprintf("%s.hcl", key)) + if diags.HasErrors() { + t.Fatal(diags.Error()) + } + + provider := &Provider{ + Config: file.Body, + } + + parts := strings.Split(key, ".") + provider.Name = parts[0] + if len(parts) > 1 { + provider.Alias = parts[1] + } + + providers[key] = provider + } + return providers + } + + validate := func(t *testing.T, msg string, expected map[string]string, actual map[string]*Provider) { + t.Helper() + + converted := make(map[string]string) + for key, provider := range actual { + content, err := provider.Config.Content(&hcl.BodySchema{ + Attributes: []hcl.AttributeSchema{ + {Name: "source", Required: true}, + }, + }) + if err 
!= nil { + t.Fatal(err) + } + + source, diags := content.Attributes["source"].Expr.Value(nil) + if diags.HasErrors() { + t.Fatal(diags.Error()) + } + converted[key] = fmt.Sprintf("source = %q", source.AsString()) + } + + if diff := cmp.Diff(expected, converted); len(diff) > 0 { + t.Errorf("%s\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", msg, str(expected), str(converted), diff) + } + } + + tcs := map[string]struct { + configProviders map[string]string + fileProviders map[string]string + runProviders []PassedProviderConfig + expectedProviders map[string]string + expectedErrors []string + }{ + "empty": { + configProviders: make(map[string]string), + expectedProviders: make(map[string]string), + }, + "only providers in config": { + configProviders: map[string]string{ + "foo": "source = \"config\"", + "bar": "source = \"config\"", + }, + expectedProviders: map[string]string{ + "foo": "source = \"config\"", + "bar": "source = \"config\"", + }, + }, + "only providers in test file": { + configProviders: make(map[string]string), + fileProviders: map[string]string{ + "foo": "source = \"testfile\"", + "bar": "source = \"testfile\"", + }, + expectedProviders: map[string]string{ + "foo": "source = \"testfile\"", + "bar": "source = \"testfile\"", + }, + }, + "only providers in run block": { + configProviders: make(map[string]string), + runProviders: []PassedProviderConfig{ + { + InChild: &ProviderConfigRef{ + Name: "foo", + }, + InParent: &ProviderConfigRef{ + Name: "bar", + }, + }, + }, + expectedProviders: make(map[string]string), + expectedErrors: []string{ + ":0,0-0: Missing provider definition for bar; This provider block references a provider definition that does not exist.", + }, + }, + "subset of providers in test file": { + configProviders: make(map[string]string), + fileProviders: map[string]string{ + "bar": "source = \"testfile\"", + }, + runProviders: []PassedProviderConfig{ + { + InChild: &ProviderConfigRef{ + Name: "foo", + }, + InParent: &ProviderConfigRef{ + Name: 
"bar", + }, + }, + }, + expectedProviders: map[string]string{ + "foo": "source = \"testfile\"", + }, + }, + "overrides providers in config": { + configProviders: map[string]string{ + "foo": "source = \"config\"", + "bar": "source = \"config\"", + }, + fileProviders: map[string]string{ + "bar": "source = \"testfile\"", + }, + expectedProviders: map[string]string{ + "foo": "source = \"config\"", + "bar": "source = \"testfile\"", + }, + }, + "overrides subset of providers in config": { + configProviders: map[string]string{ + "foo": "source = \"config\"", + "bar": "source = \"config\"", + }, + fileProviders: map[string]string{ + "foo": "source = \"testfile\"", + "bar": "source = \"testfile\"", + }, + runProviders: []PassedProviderConfig{ + { + InChild: &ProviderConfigRef{ + Name: "bar", + }, + InParent: &ProviderConfigRef{ + Name: "bar", + }, + }, + }, + expectedProviders: map[string]string{ + "foo": "source = \"config\"", + "bar": "source = \"testfile\"", + }, + }, + "handles aliases": { + configProviders: map[string]string{ + "foo.primary": "source = \"config\"", + "foo.secondary": "source = \"config\"", + }, + fileProviders: map[string]string{ + "foo": "source = \"testfile\"", + }, + runProviders: []PassedProviderConfig{ + { + InChild: &ProviderConfigRef{ + Name: "foo.secondary", + }, + InParent: &ProviderConfigRef{ + Name: "foo", + }, + }, + }, + expectedProviders: map[string]string{ + "foo.primary": "source = \"config\"", + "foo.secondary": "source = \"testfile\"", + }, + }, + } + for name, tc := range tcs { + t.Run(name, func(t *testing.T) { + config := &Config{ + Module: &Module{ + ProviderConfigs: convertToProviders(t, tc.configProviders), + }, + } + + file := &TestFile{ + Providers: convertToProviders(t, tc.fileProviders), + } + + run := &TestRun{ + Providers: tc.runProviders, + } + + reset, diags := config.TransformForTest(run, file) + + var actualErrs []string + for _, err := range diags.Errs() { + actualErrs = append(actualErrs, err.Error()) + } + if diff 
:= cmp.Diff(actualErrs, tc.expectedErrors, cmpopts.IgnoreUnexported()); len(diff) > 0 { + t.Errorf("unmatched errors\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", strings.Join(tc.expectedErrors, "\n"), strings.Join(actualErrs, "\n"), diff) + } + + validate(t, "after transform mismatch", tc.expectedProviders, config.Module.ProviderConfigs) + reset() + validate(t, "after reset mismatch", tc.configProviders, config.Module.ProviderConfigs) + + }) + } +} diff --git a/internal/configs/configschema/coerce_value.go b/internal/configs/configschema/coerce_value.go index 9c47257d45f5..fec6b3511814 100644 --- a/internal/configs/configschema/coerce_value.go +++ b/internal/configs/configschema/coerce_value.go @@ -67,10 +67,8 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { val = in.GetAttr(name) case attrS.Computed || attrS.Optional: val = cty.NullVal(attrType) - case attrS.Required: - return cty.UnknownVal(impliedType), path.NewErrorf("attribute %q is required", name) default: - return cty.UnknownVal(impliedType), path.NewErrorf("attribute %q has none of required, optional, or computed set", name) + return cty.UnknownVal(impliedType), path.NewErrorf("attribute %q is required", name) } val, err := convert.Convert(val, attrConvType) diff --git a/internal/configs/configschema/coerce_value_test.go b/internal/configs/configschema/coerce_value_test.go index 697724c6c10e..3deeff719b02 100644 --- a/internal/configs/configschema/coerce_value_test.go +++ b/internal/configs/configschema/coerce_value_test.go @@ -446,20 +446,6 @@ func TestCoerceValue(t *testing.T) { }), ``, }, - "omitted attribute requirements": { - &Block{ - Attributes: map[string]*Attribute{ - "foo": { - Type: cty.String, - }, - }, - }, - cty.EmptyObjectVal, - cty.ObjectVal(map[string]cty.Value{ - "foo": cty.UnknownVal(cty.String), - }), - `attribute "foo" has none of required, optional, or computed set`, - }, "dynamic value attributes": { &Block{ BlockTypes: map[string]*NestedBlock{ diff --git 
a/internal/configs/import.go b/internal/configs/import.go index a4d6b5869da9..f77c04ab91c9 100644 --- a/internal/configs/import.go +++ b/internal/configs/import.go @@ -5,23 +5,12 @@ package configs import ( "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/hclsyntax" - hcljson "github.com/hashicorp/hcl/v2/json" "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/zclconf/go-cty/cty" ) type Import struct { ID hcl.Expression - - To hcl.Expression - // The To address may not be resolvable immediately if it contains dynamic - // index expressions, so we will extract the ConfigResource address and - // store it here for reference. - ToResource addrs.ConfigResource - - ForEach hcl.Expression + To addrs.AbsResourceInstance ProviderConfigRef *ProviderConfigRef Provider addrs.Provider @@ -44,35 +33,17 @@ func decodeImportBlock(block *hcl.Block) (*Import, hcl.Diagnostics) { } if attr, exists := content.Attributes["to"]; exists { - toExpr, jsDiags := unwrapJSONRefExpr(attr.Expr) - diags = diags.Extend(jsDiags) - if diags.HasErrors() { - return imp, diags + traversal, traversalDiags := hcl.AbsTraversalForExpr(attr.Expr) + diags = append(diags, traversalDiags...) + if !traversalDiags.HasErrors() { + to, toDiags := addrs.ParseAbsResourceInstance(traversal) + diags = append(diags, toDiags.ToHCL()...) 
+ imp.To = to } - - imp.To = toExpr - - addr, toDiags := parseConfigResourceFromExpression(imp.To) - diags = diags.Extend(toDiags.ToHCL()) - - if addr.Resource.Mode != addrs.ManagedResourceMode { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid import address", - Detail: "Only managed resources can be imported.", - Subject: attr.Range.Ptr(), - }) - } - - imp.ToResource = addr - } - - if attr, exists := content.Attributes["for_each"]; exists { - imp.ForEach = attr.Expr } if attr, exists := content.Attributes["provider"]; exists { - if len(imp.ToResource.Module) > 0 { + if len(imp.To.Module) > 0 { diags = append(diags, &hcl.Diagnostic{ Severity: hcl.DiagError, Summary: "Invalid import provider argument", @@ -95,9 +66,6 @@ var importBlockSchema = &hcl.BodySchema{ { Name: "provider", }, - { - Name: "for_each", - }, { Name: "id", Required: true, @@ -108,117 +76,3 @@ var importBlockSchema = &hcl.BodySchema{ }, }, } - -// parseResourceInstanceFromExpression takes an arbitrary expression -// representing a resource instance, and parses out the static ConfigResource -// skipping an variable index expressions. This is used to connect an import -// block's "to" to the configuration address before the full instance -// expressions are evaluated. -func parseConfigResourceFromExpression(expr hcl.Expression) (addrs.ConfigResource, tfdiags.Diagnostics) { - traversal, hcdiags := exprToResourceTraversal(expr) - if hcdiags.HasErrors() { - return addrs.ConfigResource{}, tfdiags.Diagnostics(nil).Append(hcdiags) - } - - addr, diags := addrs.ParseAbsResourceInstance(traversal) - if diags.HasErrors() { - return addrs.ConfigResource{}, diags - } - - return addr.ConfigResource(), diags -} - -// unwrapJSONRefExpr takes a string expression from a JSON configuration, -// and re-evaluates the string as HCL. If the expression is not JSON, the -// original expression is returned directly. 
-func unwrapJSONRefExpr(expr hcl.Expression) (hcl.Expression, hcl.Diagnostics) { - if !hcljson.IsJSONExpression(expr) { - return expr, nil - } - - // We can abuse the hcl json api and rely on the fact that calling - // Value on a json expression with no EvalContext will return the - // raw string. We can then parse that as normal hcl syntax, and - // continue with the decoding. - v, diags := expr.Value(nil) - if diags.HasErrors() { - return nil, diags - } - - // the JSON representation can only be a string - if v.Type() != cty.String { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid reference expression", - Detail: "A single reference string is required.", - Subject: expr.Range().Ptr(), - }) - - return nil, diags - } - - rng := expr.Range() - expr, ds := hclsyntax.ParseExpression([]byte(v.AsString()), rng.Filename, rng.Start) - diags = diags.Extend(ds) - return expr, diags -} - -// exprToResourceTraversal is used to parse the import block's to expression, -// which must be a resource instance, but may contain limited variables with -// index expressions. Since we only need the ConfigResource to connect the -// import to the configuration, we skip any index expressions. -func exprToResourceTraversal(expr hcl.Expression) (hcl.Traversal, hcl.Diagnostics) { - var trav hcl.Traversal - var diags hcl.Diagnostics - - switch e := expr.(type) { - case *hclsyntax.RelativeTraversalExpr: - t, d := exprToResourceTraversal(e.Source) - diags = diags.Extend(d) - trav = append(trav, t...) - trav = append(trav, e.Traversal...) - - case *hclsyntax.ScopeTraversalExpr: - // a static reference, we can just append the traversal - trav = append(trav, e.Traversal...) - - case *hclsyntax.IndexExpr: - // Get the collection from the index expression, we don't need the - // index for a ConfigResource - t, d := exprToResourceTraversal(e.Collection) - diags = diags.Extend(d) - if diags.HasErrors() { - return nil, diags - } - trav = append(trav, t...) 
- - default: - // if we don't recognise the expression type (which means we are likely - // dealing with a test mock), try and interpret this as an absolute - // traversal - t, d := hcl.AbsTraversalForExpr(e) - diags = diags.Extend(d) - trav = append(trav, t...) - } - - return trav, diags -} - -// parseImportToStatic attempts to parse the To address of an import block -// statically to get the resource address. This returns false when the address -// cannot be parsed, which is usually a result of dynamic index expressions -// using for_each. -func parseImportToStatic(expr hcl.Expression) (addrs.AbsResourceInstance, bool) { - // we may have a nil expression in some error cases, which we can just - // false to avoid the parsing - if expr == nil { - return addrs.AbsResourceInstance{}, false - } - - var toDiags tfdiags.Diagnostics - traversal, hd := hcl.AbsTraversalForExpr(expr) - toDiags = toDiags.Append(hd) - to, td := addrs.ParseAbsResourceInstance(traversal) - toDiags = toDiags.Append(td) - return to, !toDiags.HasErrors() -} diff --git a/internal/configs/import_test.go b/internal/configs/import_test.go index 49432ccc931a..d00acccecb2c 100644 --- a/internal/configs/import_test.go +++ b/internal/configs/import_test.go @@ -5,59 +5,19 @@ package configs import ( "fmt" - "reflect" "testing" + "github.com/google/go-cmp/cmp" "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/hashicorp/hcl/v2/hcltest" "github.com/hashicorp/terraform/internal/addrs" "github.com/zclconf/go-cty/cty" ) -func TestParseConfigResourceFromExpression(t *testing.T) { - mustExpr := func(expr hcl.Expression, diags hcl.Diagnostics) hcl.Expression { - if diags != nil { - panic(diags.Error()) - } - return expr - } - - tests := []struct { - expr hcl.Expression - expect addrs.ConfigResource - }{ - { - mustExpr(hclsyntax.ParseExpression([]byte("test_instance.bar"), "my_traversal", hcl.Pos{})), - mustAbsResourceInstanceAddr("test_instance.bar").ConfigResource(), - }, - - // 
parsing should skip the each.key variable - { - mustExpr(hclsyntax.ParseExpression([]byte("test_instance.bar[each.key]"), "my_traversal", hcl.Pos{})), - mustAbsResourceInstanceAddr("test_instance.bar").ConfigResource(), - }, - - // nested modules must work too - { - mustExpr(hclsyntax.ParseExpression([]byte("module.foo[each.key].test_instance.bar[each.key]"), "my_traversal", hcl.Pos{})), - mustAbsResourceInstanceAddr("module.foo.test_instance.bar").ConfigResource(), - }, - } - - for i, tc := range tests { - t.Run(fmt.Sprintf("%d-%s", i, tc.expect), func(t *testing.T) { - - got, diags := parseConfigResourceFromExpression(tc.expr) - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } - if !got.Equal(tc.expect) { - t.Fatalf("got %s, want %s", got, tc.expect) - } - }) - } -} +var ( + typeComparer = cmp.Comparer(cty.Type.Equals) + valueComparer = cmp.Comparer(cty.Value.RawEquals) +) func TestImportBlock_decode(t *testing.T) { blockRange := hcl.Range{ @@ -96,9 +56,9 @@ func TestImportBlock_decode(t *testing.T) { DefRange: blockRange, }, &Import{ - ToResource: mustAbsResourceInstanceAddr("test_instance.bar").ConfigResource(), - ID: foo_str_expr, - DeclRange: blockRange, + To: mustAbsResourceInstanceAddr("test_instance.bar"), + ID: foo_str_expr, + DeclRange: blockRange, }, ``, }, @@ -120,9 +80,9 @@ func TestImportBlock_decode(t *testing.T) { DefRange: blockRange, }, &Import{ - ToResource: mustAbsResourceInstanceAddr("test_instance.bar[\"one\"]").ConfigResource(), - ID: foo_str_expr, - DeclRange: blockRange, + To: mustAbsResourceInstanceAddr("test_instance.bar[\"one\"]"), + ID: foo_str_expr, + DeclRange: blockRange, }, ``, }, @@ -144,9 +104,9 @@ func TestImportBlock_decode(t *testing.T) { DefRange: blockRange, }, &Import{ - ToResource: mustAbsResourceInstanceAddr("module.bar.test_instance.bar").ConfigResource(), - ID: foo_str_expr, - DeclRange: blockRange, + To: mustAbsResourceInstanceAddr("module.bar.test_instance.bar"), + ID: foo_str_expr, + DeclRange: 
blockRange, }, ``, }, @@ -164,31 +124,12 @@ func TestImportBlock_decode(t *testing.T) { DefRange: blockRange, }, &Import{ - ToResource: mustAbsResourceInstanceAddr("test_instance.bar").ConfigResource(), - DeclRange: blockRange, - }, - "Missing required argument", - }, - "error: missing to argument": { - &hcl.Block{ - Type: "import", - Body: hcltest.MockBody(&hcl.BodyContent{ - Attributes: hcl.Attributes{ - "to": { - Name: "to", - Expr: bar_expr, - }, - }, - }), - DefRange: blockRange, - }, - &Import{ - ID: foo_str_expr, + To: mustAbsResourceInstanceAddr("test_instance.bar"), DeclRange: blockRange, }, "Missing required argument", }, - "error: data source": { + "error: missing to argument": { &hcl.Block{ Type: "import", Body: hcltest.MockBody(&hcl.BodyContent{ @@ -197,10 +138,6 @@ func TestImportBlock_decode(t *testing.T) { Name: "id", Expr: foo_str_expr, }, - "to": { - Name: "to", - Expr: hcltest.MockExprTraversalSrc("data.test_instance.bar"), - }, }, }), DefRange: blockRange, @@ -209,7 +146,7 @@ func TestImportBlock_decode(t *testing.T) { ID: foo_str_expr, DeclRange: blockRange, }, - "Invalid import address", + "Missing required argument", }, } @@ -228,16 +165,8 @@ func TestImportBlock_decode(t *testing.T) { t.Fatal("expected error") } - if diags.HasErrors() { - return - } - - if !got.ToResource.Equal(test.want.ToResource) { - t.Errorf("expected resource %q got %q", test.want.ToResource, got.ToResource) - } - - if !reflect.DeepEqual(got.ID, test.want.ID) { - t.Errorf("expected ID %q got %q", test.want.ID, got.ID) + if !cmp.Equal(got, test.want, typeComparer, valueComparer) { + t.Fatalf("wrong result: %s", cmp.Diff(got, test.want)) } }) } diff --git a/internal/configs/mock_provider.go b/internal/configs/mock_provider.go deleted file mode 100644 index 766852d8e111..000000000000 --- a/internal/configs/mock_provider.go +++ /dev/null @@ -1,383 +0,0 @@ -package configs - -import ( - "fmt" - - "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/gohcl" - 
"github.com/hashicorp/hcl/v2/hclsyntax" - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -func decodeMockProviderBlock(block *hcl.Block) (*Provider, hcl.Diagnostics) { - var diags hcl.Diagnostics - - content, config, moreDiags := block.Body.PartialContent(mockProviderSchema) - diags = append(diags, moreDiags...) - - name := block.Labels[0] - nameDiags := checkProviderNameNormalized(name, block.DefRange) - diags = append(diags, nameDiags...) - if nameDiags.HasErrors() { - // If the name is invalid then we mustn't produce a result because - // downstream could try to use it as a provider type and then crash. - return nil, diags - } - - provider := &Provider{ - Name: name, - NameRange: block.LabelRanges[0], - DeclRange: block.DefRange, - - Config: config, - - // Mark this provider as being mocked. - Mock: true, - } - - if attr, exists := content.Attributes["alias"]; exists { - valDiags := gohcl.DecodeExpression(attr.Expr, nil, &provider.Alias) - diags = append(diags, valDiags...) - provider.AliasRange = attr.Expr.Range().Ptr() - - if !hclsyntax.ValidIdentifier(provider.Alias) { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid provider configuration alias", - Detail: fmt.Sprintf("An alias must be a valid name. %s", badIdentifierDetail), - }) - } - } - - var dataDiags hcl.Diagnostics - provider.MockData, dataDiags = decodeMockDataBody(config) - diags = append(diags, dataDiags...) - - // TODO(liamcervante): Add support for the "source" attribute before the - // v1.7 release. - - return provider, diags -} - -// MockData packages up all the available mock and override data available to -// a mocked provider. 
-type MockData struct { - MockResources map[string]*MockResource - MockDataSources map[string]*MockResource - Overrides addrs.Map[addrs.Targetable, *Override] -} - -// MockResource maps a resource or data source type and name to a set of values -// for that resource. -type MockResource struct { - Mode addrs.ResourceMode - Type string - - Defaults cty.Value - - Range hcl.Range - TypeRange hcl.Range - DefaultsRange hcl.Range -} - -// Override targets a specific module, resource or data source with a set of -// replacement values that should be used in place of whatever the underlying -// provider would normally do. -type Override struct { - Target *addrs.Target - Values cty.Value - - Range hcl.Range - TypeRange hcl.Range - TargetRange hcl.Range - ValuesRange hcl.Range -} - -func decodeMockDataBody(body hcl.Body) (*MockData, hcl.Diagnostics) { - var diags hcl.Diagnostics - - content, contentDiags := body.Content(mockDataSchema) - diags = append(diags, contentDiags...) - - data := &MockData{ - MockResources: make(map[string]*MockResource), - MockDataSources: make(map[string]*MockResource), - Overrides: addrs.MakeMap[addrs.Targetable, *Override](), - } - - for _, block := range content.Blocks { - switch block.Type { - case "mock_resource", "mock_data": - resource, resourceDiags := decodeMockResourceBlock(block) - diags = append(diags, resourceDiags...) 
- - if resource != nil { - switch resource.Mode { - case addrs.ManagedResourceMode: - if previous, ok := data.MockResources[resource.Type]; ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate mock_resource block", - Detail: fmt.Sprintf("A mock_resource block for %s has already been defined at %s.", resource.Type, previous.Range), - Subject: resource.TypeRange.Ptr(), - }) - continue - } - data.MockResources[resource.Type] = resource - case addrs.DataResourceMode: - if previous, ok := data.MockDataSources[resource.Type]; ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate mock_data block", - Detail: fmt.Sprintf("A mock_data block for %s has already been defined at %s.", resource.Type, previous.Range), - Subject: resource.TypeRange.Ptr(), - }) - continue - } - data.MockDataSources[resource.Type] = resource - } - } - case "override_resource": - override, overrideDiags := decodeOverrideResourceBlock(block) - diags = append(diags, overrideDiags...) - - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := data.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_resource block", - Detail: fmt.Sprintf("An override_resource block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - data.Overrides.Put(subject, override) - } - case "override_data": - override, overrideDiags := decodeOverrideDataBlock(block) - diags = append(diags, overrideDiags...) 
- - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := data.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_data block", - Detail: fmt.Sprintf("An override_data block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - data.Overrides.Put(subject, override) - } - } - } - - return data, diags -} - -func decodeMockResourceBlock(block *hcl.Block) (*MockResource, hcl.Diagnostics) { - var diags hcl.Diagnostics - - content, contentDiags := block.Body.Content(mockResourceSchema) - diags = append(diags, contentDiags...) - - resource := &MockResource{ - Type: block.Labels[0], - Range: block.DefRange, - TypeRange: block.LabelRanges[0], - } - - switch block.Type { - case "mock_resource": - resource.Mode = addrs.ManagedResourceMode - case "mock_data": - resource.Mode = addrs.DataResourceMode - } - - if defaults, exists := content.Attributes["defaults"]; exists { - var defaultDiags hcl.Diagnostics - resource.DefaultsRange = defaults.Range - resource.Defaults, defaultDiags = defaults.Expr.Value(nil) - diags = append(diags, defaultDiags...) - } else { - // It's fine if we don't have any defaults, just means we'll generate - // values for everything ourselves. - resource.Defaults = cty.NilVal - } - - return resource, diags -} - -func decodeOverrideModuleBlock(block *hcl.Block) (*Override, hcl.Diagnostics) { - override, diags := decodeOverrideBlock(block, "outputs", "override_module") - - if override.Target != nil { - switch override.Target.Subject.AddrType() { - case addrs.ModuleAddrType, addrs.ModuleInstanceAddrType: - // Do nothing, we're good here. 
- default: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid override target", - Detail: fmt.Sprintf("You can only target modules from override_module blocks, not %s.", override.Target.Subject), - Subject: override.TargetRange.Ptr(), - }) - return nil, diags - } - } - - return override, diags -} - -func decodeOverrideResourceBlock(block *hcl.Block) (*Override, hcl.Diagnostics) { - override, diags := decodeOverrideBlock(block, "values", "override_resource") - - if override.Target != nil { - var mode addrs.ResourceMode - - switch override.Target.Subject.AddrType() { - case addrs.AbsResourceInstanceAddrType: - subject := override.Target.Subject.(addrs.AbsResourceInstance) - mode = subject.Resource.Resource.Mode - case addrs.AbsResourceAddrType: - subject := override.Target.Subject.(addrs.AbsResource) - mode = subject.Resource.Mode - default: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid override target", - Detail: fmt.Sprintf("You can only target resources from override_resource blocks, not %s.", override.Target.Subject), - Subject: override.TargetRange.Ptr(), - }) - return nil, diags - } - - if mode != addrs.ManagedResourceMode { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid override target", - Detail: fmt.Sprintf("You can only target resources from override_resource blocks, not %s.", override.Target.Subject), - Subject: override.TargetRange.Ptr(), - }) - return nil, diags - } - } - - return override, diags -} - -func decodeOverrideDataBlock(block *hcl.Block) (*Override, hcl.Diagnostics) { - override, diags := decodeOverrideBlock(block, "values", "override_data") - - if override.Target != nil { - var mode addrs.ResourceMode - - switch override.Target.Subject.AddrType() { - case addrs.AbsResourceInstanceAddrType: - subject := override.Target.Subject.(addrs.AbsResourceInstance) - mode = subject.Resource.Resource.Mode - case addrs.AbsResourceAddrType: - 
subject := override.Target.Subject.(addrs.AbsResource) - mode = subject.Resource.Mode - default: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid override target", - Detail: fmt.Sprintf("You can only target data sources from override_data blocks, not %s.", override.Target.Subject), - Subject: override.TargetRange.Ptr(), - }) - return nil, diags - } - - if mode != addrs.DataResourceMode { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid override target", - Detail: fmt.Sprintf("You can only target data sources from override_data blocks, not %s.", override.Target.Subject), - Subject: override.TargetRange.Ptr(), - }) - return nil, diags - } - } - - return override, diags -} - -func decodeOverrideBlock(block *hcl.Block, attributeName string, blockName string) (*Override, hcl.Diagnostics) { - var diags hcl.Diagnostics - - content, contentDiags := block.Body.Content(&hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - {Name: "target"}, - {Name: attributeName}, - }, - }) - diags = append(diags, contentDiags...) - - override := &Override{ - Range: block.DefRange, - TypeRange: block.TypeRange, - } - - if target, exists := content.Attributes["target"]; exists { - override.TargetRange = target.Range - traversal, traversalDiags := hcl.AbsTraversalForExpr(target.Expr) - diags = append(diags, traversalDiags...) - if traversal != nil { - var targetDiags tfdiags.Diagnostics - override.Target, targetDiags = addrs.ParseTarget(traversal) - diags = append(diags, targetDiags.ToHCL()...) 
- } - } else { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing target attribute", - Detail: fmt.Sprintf("%s blocks must specify a target address.", blockName), - Subject: override.Range.Ptr(), - }) - } - - if attribute, exists := content.Attributes[attributeName]; exists { - var valueDiags hcl.Diagnostics - override.ValuesRange = attribute.Range - override.Values, valueDiags = attribute.Expr.Value(nil) - diags = append(diags, valueDiags...) - } else { - // It's fine if we don't have any values, just means we'll generate - // values for everything ourselves. - override.Values = cty.NilVal - } - - return override, diags -} - -var mockProviderSchema = &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "alias", - }, - { - Name: "source", - }, - }, -} - -var mockDataSchema = &hcl.BodySchema{ - Blocks: []hcl.BlockHeaderSchema{ - {Type: "mock_resource", LabelNames: []string{"type"}}, - {Type: "mock_data", LabelNames: []string{"type"}}, - {Type: "override_resource"}, - {Type: "override_data"}, - }, -} - -var mockResourceSchema = &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - {Name: "defaults"}, - }, -} diff --git a/internal/configs/module.go b/internal/configs/module.go index bec7e19e4f17..c7696013b2cc 100644 --- a/internal/configs/module.go +++ b/internal/configs/module.go @@ -421,18 +421,15 @@ func (m *Module) appendFile(file *File) hcl.Diagnostics { m.Moved = append(m.Moved, file.Moved...) for _, i := range file.Import { - iTo, iToOK := parseImportToStatic(i.To) for _, mi := range m.Import { - // Try to detect duplicate import targets. We need to see if the to - // address can be parsed statically. 
- miTo, miToOK := parseImportToStatic(mi.To) - if iToOK && miToOK && iTo.Equal(miTo) { + if i.To.Equal(mi.To) { diags = append(diags, &hcl.Diagnostic{ Severity: hcl.DiagError, - Summary: fmt.Sprintf("Duplicate import configuration for %q", i.ToResource), - Detail: fmt.Sprintf("An import block for the resource %q was already declared at %s. A resource can have only one import block.", i.ToResource, mi.DeclRange), - Subject: i.To.Range().Ptr(), + Summary: fmt.Sprintf("Duplicate import configuration for %q", i.To), + Detail: fmt.Sprintf("An import block for the resource %q was already declared at %s. A resource can have only one import block.", i.To, mi.DeclRange), + Subject: &i.DeclRange, }) + continue } } @@ -442,7 +439,7 @@ func (m *Module) appendFile(file *File) hcl.Diagnostics { Alias: i.ProviderConfigRef.Alias, }) } else { - implied, err := addrs.ParseProviderPart(i.ToResource.Resource.ImpliedProvider()) + implied, err := addrs.ParseProviderPart(i.To.Resource.Resource.ImpliedProvider()) if err == nil { i.Provider = m.ImpliedProviderForUnqualifiedType(implied) } @@ -453,10 +450,10 @@ func (m *Module) appendFile(file *File) hcl.Diagnostics { // It is invalid for any import block to have a "to" argument matching // any moved block's "from" argument. for _, mb := range m.Moved { - // FIXME: This is not correct for moved modules, and won't catch - // all combinations of expanded imports (though preventing - // collisions based on ConfigResource alone may be sufficient) - if mb.From.String() == i.ToResource.String() { + // Comparing string serialisations is good enough here, because we + // only care about equality in the case that both addresses are + // AbsResourceInstances. 
+ if mb.From.String() == i.To.String() { diags = append(diags, &hcl.Diagnostic{ Severity: hcl.DiagError, Summary: "Cannot import to a move source", diff --git a/internal/configs/parser_config_dir_test.go b/internal/configs/parser_config_dir_test.go index 7f6132fc718e..6b06c0640e57 100644 --- a/internal/configs/parser_config_dir_test.go +++ b/internal/configs/parser_config_dir_test.go @@ -6,7 +6,6 @@ package configs import ( "fmt" "io/ioutil" - "os" "path/filepath" "testing" @@ -121,7 +120,6 @@ func TestParserLoadConfigDirWithTests(t *testing.T) { "testdata/valid-modules/with-tests-nested", "testdata/valid-modules/with-tests-very-nested", "testdata/valid-modules/with-tests-json", - "testdata/valid-modules/with-mocks", } for _, directory := range directories { @@ -148,88 +146,6 @@ func TestParserLoadConfigDirWithTests(t *testing.T) { } } -func TestParserLoadTestFiles_Invalid(t *testing.T) { - - tcs := map[string][]string{ - "duplicate_data_overrides": { - "duplicate_data_overrides.tftest.hcl:7,3-16: Duplicate override_data block; An override_data block targeting data.aws_instance.test has already been defined at duplicate_data_overrides.tftest.hcl:2,3-16.", - "duplicate_data_overrides.tftest.hcl:18,1-14: Duplicate override_data block; An override_data block targeting data.aws_instance.test has already been defined at duplicate_data_overrides.tftest.hcl:13,1-14.", - "duplicate_data_overrides.tftest.hcl:29,3-16: Duplicate override_data block; An override_data block targeting data.aws_instance.test has already been defined at duplicate_data_overrides.tftest.hcl:24,3-16.", - }, - "duplicate_mixed_providers": { - "duplicate_mixed_providers.tftest.hcl:3,1-20: Duplicate provider block; A provider for aws is already defined at duplicate_mixed_providers.tftest.hcl:1,10-15.", - "duplicate_mixed_providers.tftest.hcl:9,1-20: Duplicate provider block; A provider for aws.test is already defined at duplicate_mixed_providers.tftest.hcl:5,10-15.", - }, - 
"duplicate_mock_data_sources": { - "duplicate_mock_data_sources.tftest.hcl:7,13-27: Duplicate mock_data block; A mock_data block for aws_instance has already been defined at duplicate_mock_data_sources.tftest.hcl:3,3-27.", - }, - "duplicate_mock_providers": { - "duplicate_mock_providers.tftest.hcl:3,1-20: Duplicate provider block; A provider for aws is already defined at duplicate_mock_providers.tftest.hcl:1,15-20.", - "duplicate_mock_providers.tftest.hcl:9,1-20: Duplicate provider block; A provider for aws.test is already defined at duplicate_mock_providers.tftest.hcl:5,15-20.", - }, - "duplicate_mock_resources": { - "duplicate_mock_resources.tftest.hcl:7,17-31: Duplicate mock_resource block; A mock_resource block for aws_instance has already been defined at duplicate_mock_resources.tftest.hcl:3,3-31.", - }, - "duplicate_module_overrides": { - "duplicate_module_overrides.tftest.hcl:7,1-16: Duplicate override_module block; An override_module block targeting module.child has already been defined at duplicate_module_overrides.tftest.hcl:2,1-16.", - "duplicate_module_overrides.tftest.hcl:18,3-18: Duplicate override_module block; An override_module block targeting module.child has already been defined at duplicate_module_overrides.tftest.hcl:13,3-18.", - }, - "duplicate_providers": { - "duplicate_providers.tftest.hcl:3,1-15: Duplicate provider block; A provider for aws is already defined at duplicate_providers.tftest.hcl:1,10-15.", - "duplicate_providers.tftest.hcl:9,1-15: Duplicate provider block; A provider for aws.test is already defined at duplicate_providers.tftest.hcl:5,10-15.", - }, - "duplicate_resource_overrides": { - "duplicate_resource_overrides.tftest.hcl:7,3-20: Duplicate override_resource block; An override_resource block targeting aws_instance.test has already been defined at duplicate_resource_overrides.tftest.hcl:2,3-20.", - "duplicate_resource_overrides.tftest.hcl:18,1-18: Duplicate override_resource block; An override_resource block targeting 
aws_instance.test has already been defined at duplicate_resource_overrides.tftest.hcl:13,1-18.", - "duplicate_resource_overrides.tftest.hcl:29,3-20: Duplicate override_resource block; An override_resource block targeting aws_instance.test has already been defined at duplicate_resource_overrides.tftest.hcl:24,3-20.", - }, - "invalid_data_override": { - "invalid_data_override.tftest.hcl:6,1-14: Missing target attribute; override_data blocks must specify a target address.", - }, - "invalid_data_override_target": { - "invalid_data_override_target.tftest.hcl:8,3-24: Invalid override target; You can only target data sources from override_data blocks, not module.child.", - "invalid_data_override_target.tftest.hcl:3,3-31: Invalid override target; You can only target data sources from override_data blocks, not aws_instance.target.", - }, - "invalid_mock_data_sources": { - "invalid_mock_data_sources.tftest.hcl:7,13-16: Variables not allowed; Variables may not be used here.", - }, - "invalid_mock_resources": { - "invalid_mock_resources.tftest.hcl:7,13-16: Variables not allowed; Variables may not be used here.", - }, - "invalid_module_override": { - "invalid_module_override.tftest.hcl:5,1-16: Missing target attribute; override_module blocks must specify a target address.", - "invalid_module_override.tftest.hcl:11,3-9: Unsupported argument; An argument named \"values\" is not expected here.", - }, - "invalid_module_override_target": { - "invalid_module_override_target.tftest.hcl:3,3-31: Invalid override target; You can only target modules from override_module blocks, not aws_instance.target.", - "invalid_module_override_target.tftest.hcl:8,3-36: Invalid override target; You can only target modules from override_module blocks, not data.aws_instance.target.", - }, - "invalid_resource_override": { - "invalid_resource_override.tftest.hcl:6,1-18: Missing target attribute; override_resource blocks must specify a target address.", - }, - "invalid_resource_override_target": { - 
"invalid_resource_override_target.tftest.hcl:3,3-36: Invalid override target; You can only target resources from override_resource blocks, not data.aws_instance.target.", - "invalid_resource_override_target.tftest.hcl:8,3-24: Invalid override target; You can only target resources from override_resource blocks, not module.child.", - }, - } - - for name, expected := range tcs { - t.Run(name, func(t *testing.T) { - src, err := os.ReadFile(fmt.Sprintf("testdata/invalid-test-files/%s.tftest.hcl", name)) - if err != nil { - t.Fatal(err) - } - - parser := testParser(map[string]string{ - fmt.Sprintf("%s.tftest.hcl", name): string(src), - }) - - _, actual := parser.LoadTestFile(fmt.Sprintf("%s.tftest.hcl", name)) - assertExactDiagnostics(t, actual, expected) - }) - } -} - func TestParserLoadConfigDirWithTests_ReturnsWarnings(t *testing.T) { parser := NewParser(nil) mod, diags := parser.LoadConfigDirWithTests("testdata/valid-modules/with-tests", "not_real") diff --git a/internal/configs/provider.go b/internal/configs/provider.go index b9b0cf3d99dd..309beb169d7b 100644 --- a/internal/configs/provider.go +++ b/internal/configs/provider.go @@ -35,12 +35,6 @@ type Provider struct { // export this so providers don't need to be re-resolved. // This same field is also added to the ProviderConfigRef struct. providerType addrs.Provider - - // Mock and MockData declare this provider as a "mock_provider", which means - // it should use the data in MockData instead of actually initialising the - // provider. - Mock bool - MockData *MockData } func decodeProviderBlock(block *hcl.Block) (*Provider, hcl.Diagnostics) { @@ -66,10 +60,6 @@ func decodeProviderBlock(block *hcl.Block) (*Provider, hcl.Diagnostics) { NameRange: block.LabelRanges[0], Config: config, DeclRange: block.DefRange, - - // We'll just explicitly mark real providers as not being mocks even - // though this is the default. 
- Mock: false, } if attr, exists := content.Attributes["alias"]; exists { diff --git a/internal/configs/resource.go b/internal/configs/resource.go index c7aac7289dc5..fdbdabf38a62 100644 --- a/internal/configs/resource.go +++ b/internal/configs/resource.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/gohcl" "github.com/hashicorp/hcl/v2/hclsyntax" + hcljson "github.com/hashicorp/hcl/v2/json" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/lang" @@ -538,25 +539,34 @@ func decodeDataBlock(block *hcl.Block, override, nested bool) (*Resource, hcl.Di // replace_triggered_by expressions, ensuring they only contains references to // a single resource, and the only extra variables are count.index or each.key. func decodeReplaceTriggeredBy(expr hcl.Expression) ([]hcl.Expression, hcl.Diagnostics) { + // Since we are manually parsing the replace_triggered_by argument, we + // need to specially handle json configs, in which case the values will + // be json strings rather than hcl. To simplify parsing however we will + // decode the individual list elements, rather than the entire expression. + isJSON := hcljson.IsJSONExpression(expr) + exprs, diags := hcl.ExprList(expr) - if diags.HasErrors() { - return nil, diags - } for i, expr := range exprs { - // Since we are manually parsing the replace_triggered_by argument, we - // need to specially handle json configs, in which case the values will - // be json strings rather than hcl. To simplify parsing however we will - // decode the individual list elements, rather than the entire - // expression. - var jsDiags hcl.Diagnostics - expr, jsDiags = unwrapJSONRefExpr(expr) - diags = diags.Extend(jsDiags) - if diags.HasErrors() { - continue + if isJSON { + // We can abuse the hcl json api and rely on the fact that calling + // Value on a json expression with no EvalContext will return the + // raw string. 
We can then parse that as normal hcl syntax, and + // continue with the decoding. + v, ds := expr.Value(nil) + diags = diags.Extend(ds) + if diags.HasErrors() { + continue + } + + expr, ds = hclsyntax.ParseExpression([]byte(v.AsString()), "", expr.Range().Start) + diags = diags.Extend(ds) + if diags.HasErrors() { + continue + } + // make sure to swap out the expression we're returning too + exprs[i] = expr } - // re-assign the value in case it was replaced by a json expression - exprs[i] = expr refs, refDiags := lang.ReferencesInExpr(addrs.ParseRef, expr) for _, diag := range refDiags { diff --git a/internal/configs/test_file.go b/internal/configs/test_file.go index 89f704d82766..6154f1edd765 100644 --- a/internal/configs/test_file.go +++ b/internal/configs/test_file.go @@ -51,16 +51,10 @@ type TestFile struct { // Providers defines a set of providers that are available to run blocks // within this test file. // - // Some or all of these providers may be mocked providers. - // // If empty, tests should use the default providers for the module under // test. Providers map[string]*Provider - // Overrides contains any specific overrides that should be applied for this - // test outside any mock providers. - Overrides addrs.Map[addrs.Targetable, *Override] - // Runs defines the sequential list of run blocks that should be executed in // order. Runs []*TestRun @@ -94,10 +88,6 @@ type TestRun struct { // take precedence over the global definition. Variables map[string]hcl.Expression - // Overrides contains any specific overrides that should be applied for this - // run block only outside any mock providers or overrides from the file. - Overrides addrs.Map[addrs.Targetable, *Override] - // Providers specifies the set of providers that should be loaded into the // module for this run block. // @@ -191,7 +181,7 @@ type TestRunOptions struct { // Refresh is analogous to the -refresh=false Terraform plan option. 
Refresh bool - // Replace is analogous to the -replace=ADDRESS Terraform plan option. + // Replace is analogous to the -refresh=ADDRESS Terraform plan option. Replace []hcl.Traversal // Target is analogous to the -target=ADDRESS Terraform plan option. @@ -208,7 +198,6 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { tf := TestFile{ Providers: make(map[string]*Provider), - Overrides: addrs.MakeMap[addrs.Targetable, *Override](), } runBlockNames := make(map[string]hcl.Range) @@ -256,84 +245,7 @@ func loadTestFile(body hcl.Body) (*TestFile, hcl.Diagnostics) { provider, providerDiags := decodeProviderBlock(block) diags = append(diags, providerDiags...) if provider != nil { - key := provider.moduleUniqueKey() - if previous, exists := tf.Providers[key]; exists { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate provider block", - Detail: fmt.Sprintf("A provider for %s is already defined at %s.", key, previous.NameRange), - Subject: provider.DeclRange.Ptr(), - }) - continue - } - tf.Providers[key] = provider - } - case "mock_provider": - provider, providerDiags := decodeMockProviderBlock(block) - diags = append(diags, providerDiags...) - if provider != nil { - key := provider.moduleUniqueKey() - if previous, exists := tf.Providers[key]; exists { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate provider block", - Detail: fmt.Sprintf("A provider for %s is already defined at %s.", key, previous.NameRange), - Subject: provider.DeclRange.Ptr(), - }) - continue - } - tf.Providers[key] = provider - } - case "override_resource": - override, overrideDiags := decodeOverrideResourceBlock(block) - diags = append(diags, overrideDiags...) 
- - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := tf.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_resource block", - Detail: fmt.Sprintf("An override_resource block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - tf.Overrides.Put(subject, override) - } - case "override_data": - override, overrideDiags := decodeOverrideDataBlock(block) - diags = append(diags, overrideDiags...) - - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := tf.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_data block", - Detail: fmt.Sprintf("An override_data block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - tf.Overrides.Put(subject, override) - } - case "override_module": - override, overrideDiags := decodeOverrideModuleBlock(block) - diags = append(diags, overrideDiags...) - - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := tf.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_module block", - Detail: fmt.Sprintf("An override_module block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - tf.Overrides.Put(subject, override) + tf.Providers[provider.moduleUniqueKey()] = provider } } } @@ -348,8 +260,6 @@ func decodeTestRunBlock(block *hcl.Block) (*TestRun, hcl.Diagnostics) { diags = append(diags, contentDiags...) 
r := TestRun{ - Overrides: addrs.MakeMap[addrs.Targetable, *Override](), - Name: block.Labels[0], NameDeclRange: block.LabelRanges[0], DeclRange: block.DefRange, @@ -412,57 +322,6 @@ func decodeTestRunBlock(block *hcl.Block) (*TestRun, hcl.Diagnostics) { if !moduleDiags.HasErrors() { r.Module = module } - case "override_resource": - override, overrideDiags := decodeOverrideResourceBlock(block) - diags = append(diags, overrideDiags...) - - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := r.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_resource block", - Detail: fmt.Sprintf("An override_resource block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - r.Overrides.Put(subject, override) - } - case "override_data": - override, overrideDiags := decodeOverrideDataBlock(block) - diags = append(diags, overrideDiags...) - - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := r.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_data block", - Detail: fmt.Sprintf("An override_data block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - r.Overrides.Put(subject, override) - } - case "override_module": - override, overrideDiags := decodeOverrideModuleBlock(block) - diags = append(diags, overrideDiags...) 
- - if override != nil && override.Target != nil { - subject := override.Target.Subject - if previous, ok := r.Overrides.GetOk(subject); ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate override_module block", - Detail: fmt.Sprintf("An override_module block targeting %s has already been defined at %s.", subject, previous.Range), - Subject: override.Range.Ptr(), - }) - continue - } - r.Overrides.Put(subject, override) - } } } @@ -689,22 +548,9 @@ var testFileSchema = &hcl.BodySchema{ Type: "provider", LabelNames: []string{"name"}, }, - { - Type: "mock_provider", - LabelNames: []string{"name"}, - }, { Type: "variables", }, - { - Type: "override_resource", - }, - { - Type: "override_data", - }, - { - Type: "override_module", - }, }, } @@ -727,15 +573,6 @@ var testRunBlockSchema = &hcl.BodySchema{ { Type: "module", }, - { - Type: "override_resource", - }, - { - Type: "override_data", - }, - { - Type: "override_module", - }, }, } diff --git a/internal/configs/testdata/invalid-files/import-for-each.tf b/internal/configs/testdata/invalid-files/import-for-each.tf deleted file mode 100644 index 622d25ba34e5..000000000000 --- a/internal/configs/testdata/invalid-files/import-for-each.tf +++ /dev/null @@ -1,9 +0,0 @@ -import { - for_each = ["a", "b"] - to = invalid[each.value] - id = each.value -} - -resource "test_resource" "test" { - for_each = toset(["a", "b"]) -} diff --git a/internal/configs/testdata/invalid-files/import-for-each.tf.json b/internal/configs/testdata/invalid-files/import-for-each.tf.json deleted file mode 100644 index 041060d250fb..000000000000 --- a/internal/configs/testdata/invalid-files/import-for-each.tf.json +++ /dev/null @@ -1,14 +0,0 @@ -{ - "import": { - "for_each": "[\"a\", \"b\"]", - "to": ["test_resource.test[wrong]"], - "id": "${each.value}" - }, - "resource": { - "test_resource": { - "test": { - "for_each": {"a":"a","b":"b"} - } - } - } -} diff --git 
a/internal/configs/testdata/invalid-files/resource-rtb.tf.json b/internal/configs/testdata/invalid-files/resource-rtb.tf.json deleted file mode 100644 index a2389b2cc777..000000000000 --- a/internal/configs/testdata/invalid-files/resource-rtb.tf.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "resource": { - "test_object": { - "a": { - "count": 1, - "test_string": "new" - }, - "b": { - "count": 1, - "lifecycle": { - "replace_triggered_by": [ - {"test_object.a[count.index].test_string":"nope"} - ] - } - } - } - } -} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_data_overrides.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_data_overrides.tftest.hcl deleted file mode 100644 index f77b788d9698..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_data_overrides.tftest.hcl +++ /dev/null @@ -1,33 +0,0 @@ -mock_provider "aws" { - override_data { - target = data.aws_instance.test - values = {} - } - - override_data { - target = data.aws_instance.test - values = {} - } -} - -override_data { - target = data.aws_instance.test - values = {} -} - -override_data { - target = data.aws_instance.test - values = {} -} - -run "test" { - override_data { - target = data.aws_instance.test - values = {} - } - - override_data { - target = data.aws_instance.test - values = {} - } -} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_mixed_providers.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_mixed_providers.tftest.hcl deleted file mode 100644 index b319143f207d..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_mixed_providers.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -provider "aws" {} - -mock_provider "aws" {} - -provider "aws" { - alias = "test" -} - -mock_provider "aws" { - alias = "test" -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_mock_data_sources.tftest.hcl 
b/internal/configs/testdata/invalid-test-files/duplicate_mock_data_sources.tftest.hcl deleted file mode 100644 index 8ff168509332..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_mock_data_sources.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -mock_provider "aws" { - - mock_data "aws_instance" { - defaults = {} - } - - mock_data "aws_instance" { - defaults = {} - } - -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_mock_providers.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_mock_providers.tftest.hcl deleted file mode 100644 index 0ac2ae2bfeaa..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_mock_providers.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -mock_provider "aws" {} - -mock_provider "aws" {} - -mock_provider "aws" { - alias = "test" -} - -mock_provider "aws" { - alias = "test" -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_mock_resources.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_mock_resources.tftest.hcl deleted file mode 100644 index d95a5b1c50ab..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_mock_resources.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -mock_provider "aws" { - - mock_resource "aws_instance" { - defaults = {} - } - - mock_resource "aws_instance" { - defaults = {} - } - -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_module_overrides.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_module_overrides.tftest.hcl deleted file mode 100644 index 176d3e5f5f71..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_module_overrides.tftest.hcl +++ /dev/null @@ -1,22 +0,0 @@ - -override_module { - target = module.child - outputs = {} -} - -override_module { - target = module.child - outputs = {} -} - -run "test" { - override_module { - target = module.child - outputs = {} - } - - 
override_module { - target = module.child - outputs = {} - } -} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_providers.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_providers.tftest.hcl deleted file mode 100644 index a0dac23dfa0f..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_providers.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -provider "aws" {} - -provider "aws" {} - -provider "aws" { - alias = "test" -} - -provider "aws" { - alias = "test" -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/duplicate_resource_overrides.tftest.hcl b/internal/configs/testdata/invalid-test-files/duplicate_resource_overrides.tftest.hcl deleted file mode 100644 index 1494449197a9..000000000000 --- a/internal/configs/testdata/invalid-test-files/duplicate_resource_overrides.tftest.hcl +++ /dev/null @@ -1,33 +0,0 @@ -mock_provider "aws" { - override_resource { - target = aws_instance.test - values = {} - } - - override_resource { - target = aws_instance.test - values = {} - } -} - -override_resource { - target = aws_instance.test - values = {} -} - -override_resource { - target = aws_instance.test - values = {} -} - -run "test" { - override_resource { - target = aws_instance.test - values = {} - } - - override_resource { - target = aws_instance.test - values = {} - } -} diff --git a/internal/configs/testdata/invalid-test-files/invalid_data_override.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_data_override.tftest.hcl deleted file mode 100644 index 3196ce49d96a..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_data_override.tftest.hcl +++ /dev/null @@ -1,10 +0,0 @@ - -override_data { - target = data.aws_instance.target -} - -override_data { - values = {} -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/invalid_data_override_target.tftest.hcl 
b/internal/configs/testdata/invalid-test-files/invalid_data_override_target.tftest.hcl deleted file mode 100644 index f3172477fdfb..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_data_override_target.tftest.hcl +++ /dev/null @@ -1,12 +0,0 @@ - -override_data { - target = aws_instance.target - values = {} -} - -override_data { - target = module.child - values = {} -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/invalid_mock_data_sources.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_mock_data_sources.tftest.hcl deleted file mode 100644 index e45371de8d78..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_mock_data_sources.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -mock_provider "aws" { - - mock_data "aws_instance" {} - - mock_data "aws_ami_instance" { - defaults = { - ami = var.ami - } - } - -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/invalid_mock_resources.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_mock_resources.tftest.hcl deleted file mode 100644 index 936e57dc96b1..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_mock_resources.tftest.hcl +++ /dev/null @@ -1,13 +0,0 @@ -mock_provider "aws" { - - mock_resource "aws_instance" {} - - mock_resource "aws_ami_instance" { - defaults = { - ami = var.ami - } - } - -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/invalid_module_override.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_module_override.tftest.hcl deleted file mode 100644 index 7562d08796a3..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_module_override.tftest.hcl +++ /dev/null @@ -1,14 +0,0 @@ -override_module { - target = module.child -} - -override_module { - outputs = {} -} - -override_module { - target = module.other - values = {} -} - -run "test" {} diff --git 
a/internal/configs/testdata/invalid-test-files/invalid_module_override_target.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_module_override_target.tftest.hcl deleted file mode 100644 index ed5081bc245a..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_module_override_target.tftest.hcl +++ /dev/null @@ -1,12 +0,0 @@ - -override_module { - target = aws_instance.target - outputs = {} -} - -override_module { - target = data.aws_instance.target - outputs = {} -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/invalid_resource_override.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_resource_override.tftest.hcl deleted file mode 100644 index 2dd0f6f6b94b..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_resource_override.tftest.hcl +++ /dev/null @@ -1,10 +0,0 @@ - -override_resource { - target = aws_instance.target -} - -override_resource { - values = {} -} - -run "test" {} diff --git a/internal/configs/testdata/invalid-test-files/invalid_resource_override_target.tftest.hcl b/internal/configs/testdata/invalid-test-files/invalid_resource_override_target.tftest.hcl deleted file mode 100644 index ca3e5a317a14..000000000000 --- a/internal/configs/testdata/invalid-test-files/invalid_resource_override_target.tftest.hcl +++ /dev/null @@ -1,12 +0,0 @@ - -override_resource { - target = data.aws_instance.target - values = {} -} - -override_resource { - target = module.child - values = {} -} - -run "test" {} diff --git a/internal/configs/testdata/valid-files/import-for-each.tf b/internal/configs/testdata/valid-files/import-for-each.tf deleted file mode 100644 index 365fa34624fc..000000000000 --- a/internal/configs/testdata/valid-files/import-for-each.tf +++ /dev/null @@ -1,9 +0,0 @@ -import { - for_each = ["a", "b"] - to = test_resource.test[each.value] - id = each.value -} - -resource "test_resource" "test" { - for_each = toset(["a", "b"]) -} diff --git 
a/internal/configs/testdata/valid-files/import-for-each.tf.json b/internal/configs/testdata/valid-files/import-for-each.tf.json deleted file mode 100644 index ef1be1f5a947..000000000000 --- a/internal/configs/testdata/valid-files/import-for-each.tf.json +++ /dev/null @@ -1,14 +0,0 @@ -{ - "import": { - "for_each": "[\"a\", \"b\"]", - "to": "test_resource.test[each.value]", - "id": "${each.value}" - }, - "resource": { - "test_resource": { - "test": { - "for_each": {"a":"a","b":"b"} - } - } - } -} diff --git a/internal/configs/testdata/valid-modules/with-mocks/child/main.tf b/internal/configs/testdata/valid-modules/with-mocks/child/main.tf deleted file mode 100644 index 1ceca4ccdcbc..000000000000 --- a/internal/configs/testdata/valid-modules/with-mocks/child/main.tf +++ /dev/null @@ -1,8 +0,0 @@ - -output "string" { - value = "Hello, world!" -} - -output "number" { - value = 0 -} diff --git a/internal/configs/testdata/valid-modules/with-mocks/main.tf b/internal/configs/testdata/valid-modules/with-mocks/main.tf deleted file mode 100644 index 572c350aabf5..000000000000 --- a/internal/configs/testdata/valid-modules/with-mocks/main.tf +++ /dev/null @@ -1,19 +0,0 @@ -terraform { - required_providers { - aws = { - source = "hashicorp/aws" - } - } -} - -resource "aws_instance" "first" {} - -resource "aws_instance" "second" {} - -resource "aws_instance" "third" {} - -data "aws_secretsmanager_secret" "creds" {} - -module "child" { - source = "./child" -} diff --git a/internal/configs/testdata/valid-modules/with-mocks/test_case_one.tftest.hcl b/internal/configs/testdata/valid-modules/with-mocks/test_case_one.tftest.hcl deleted file mode 100644 index 5df5f5950b37..000000000000 --- a/internal/configs/testdata/valid-modules/with-mocks/test_case_one.tftest.hcl +++ /dev/null @@ -1,48 +0,0 @@ - -mock_provider "aws" { - - mock_resource "aws_instance" { - defaults = { - arn = "aws:instance" - } - } - - mock_data "aws_secretsmanager_secret" {} - - override_resource { - target = 
aws_instance.second - values = {} - } - - override_data { - target = data.aws_secretsmanager_secret.creds - values = { - arn = "aws:secretsmanager" - } - } -} - -override_module { - target = module.child - outputs = { - string = "testfile" - number = -1 - } -} - -run "test" { - override_resource { - target = aws_instance.first - values = { - arn = "aws:instance:first" - } - } - - override_module { - target = module.child - outputs = { - string = "testrun" - number = -1 - } - } -} diff --git a/internal/configs/testdata/valid-modules/with-mocks/test_case_two.tftest.hcl b/internal/configs/testdata/valid-modules/with-mocks/test_case_two.tftest.hcl deleted file mode 100644 index 9133e6aeb3f3..000000000000 --- a/internal/configs/testdata/valid-modules/with-mocks/test_case_two.tftest.hcl +++ /dev/null @@ -1,25 +0,0 @@ - -provider "aws" {} - -override_data { - target = data.aws_secretsmanager_secret.creds - values = { - arn = "aws:secretsmanager" - } -} - -run "test" { - override_resource { - target = aws_instance.first - values = { - arn = "aws:instance:first" - } - } - - override_data { - target = data.aws_secretsmanager_secret.creds - values = { - arn = "aws:secretsmanager" - } - } -} diff --git a/internal/lang/funcs/sensitive.go b/internal/lang/funcs/sensitive.go index 97703a4f1326..a08481762952 100644 --- a/internal/lang/funcs/sensitive.go +++ b/internal/lang/funcs/sensitive.go @@ -52,6 +52,9 @@ var NonsensitiveFunc = function.New(&function.Spec{ return args[0].Type(), nil }, Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + if args[0].IsKnown() && !args[0].HasMark(marks.Sensitive) { + return cty.DynamicVal, function.NewArgErrorf(0, "the given value is not sensitive, so this call is redundant") + } v, m := args[0].Unmark() delete(m, marks.Sensitive) // remove the sensitive marking return v.WithMarks(m), nil diff --git a/internal/lang/funcs/sensitive_test.go b/internal/lang/funcs/sensitive_test.go index 96e647d05348..8d510cd8a1da 100644 --- 
a/internal/lang/funcs/sensitive_test.go +++ b/internal/lang/funcs/sensitive_test.go @@ -130,16 +130,16 @@ func TestNonsensitive(t *testing.T) { ``, }, - // Passing a value that is already non-sensitive is not an error, - // as this function may be used with specific to ensure that all - // values are indeed non-sensitive + // Passing a value that is already non-sensitive is an error, + // because this function should always be used with specific + // intention, not just as a "make everything visible" hammer. { cty.NumberIntVal(1), - ``, + `the given value is not sensitive, so this call is redundant`, }, { cty.NullVal(cty.String), - ``, + `the given value is not sensitive, so this call is redundant`, }, // Unknown values may become sensitive once they are known, so we diff --git a/internal/moduletest/config/config.go b/internal/moduletest/config/config.go deleted file mode 100644 index de793daf506c..000000000000 --- a/internal/moduletest/config/config.go +++ /dev/null @@ -1,136 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package config - -import ( - "fmt" - - "github.com/hashicorp/hcl/v2" - - "github.com/hashicorp/terraform/internal/backend" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/moduletest" - hcltest "github.com/hashicorp/terraform/internal/moduletest/hcl" - "github.com/hashicorp/terraform/internal/terraform" -) - -// TransformConfigForTest transforms the provided configuration ready for the -// test execution specified by the provided run block and test file. -// -// In practice, this actually just means performing some surgery on the -// available providers. We want to copy the relevant providers from the test -// file into the configuration. We also want to process the providers so they -// use variables from the file instead of variables from within the test file. 
-// -// We also return a reset function that should be called to return the -// configuration to it's original state before the next run block or test file -// needs to use it. -func TransformConfigForTest(config *configs.Config, run *moduletest.Run, file *moduletest.File, availableVariables map[string]backend.UnparsedVariableValue, availableRunBlocks map[string]*terraform.TestContext, requiredProviders map[string]bool) (func(), hcl.Diagnostics) { - var diags hcl.Diagnostics - - // Currently, we only need to override the provider settings. - // - // We can have a set of providers defined within the config, we can also - // have a set of providers defined within the test file. Then the run can - // also specify a set of overrides that tell Terraform exactly which - // providers from the test file to apply into the config. - // - // The process here is as follows: - // 1. Take all the providers in the original config keyed by name.alias, - // we call this `previous` - // 2. Copy them all into a new map, we call this `next`. - // 3a. If the run has configuration specifying provider overrides, we copy - // only the specified providers from the test file into `next`. While - // doing this we ensure to preserve the name and alias from the - // original config. - // 3b. If the run has no override configuration, we copy all the providers - // from the test file into `next`, overriding all providers with name - // collisions from the original config. - // 4. We then modify the original configuration so that the providers it - // holds are the combination specified by the original config, the test - // file and the run file. - // 5. We then return a function that resets the original config back to - // its original state. This can be called by the surrounding test once - // completed so future run blocks can safely execute. - - // First, initialise `previous` and `next`. `previous` contains a backup of - // the providers from the original config. 
`next` contains the set of - // providers that will be used by the test. `next` starts with the set of - // providers from the original config. - previous := config.Module.ProviderConfigs - next := make(map[string]*configs.Provider) - for key, value := range previous { - next[key] = value - } - - if len(run.Config.Providers) > 0 { - // Then we'll only copy over and overwrite the specific providers asked - // for by this run block. - - for _, ref := range run.Config.Providers { - - testProvider, ok := file.Config.Providers[ref.InParent.String()] - if !ok { - // Then this reference was invalid as we didn't have the - // specified provider in the parent. This should have been - // caught earlier in validation anyway so is unlikely to happen. - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Missing provider definition for %s", ref.InParent.String()), - Detail: "This provider block references a provider definition that does not exist.", - Subject: ref.InParent.NameRange.Ptr(), - }) - continue - } - - next[ref.InChild.String()] = &configs.Provider{ - Name: ref.InChild.Name, - NameRange: ref.InChild.NameRange, - Alias: ref.InChild.Alias, - AliasRange: ref.InChild.AliasRange, - Version: testProvider.Version, - Config: &hcltest.ProviderConfig{ - Original: testProvider.Config, - ConfigVariables: config.Module.Variables, - AvailableVariables: availableVariables, - AvailableRunBlocks: availableRunBlocks, - }, - DeclRange: testProvider.DeclRange, - } - - } - } else { - // Otherwise, let's copy over and overwrite all providers specified by - // the test file itself. - for key, provider := range file.Config.Providers { - - if !requiredProviders[key] { - // Then we don't actually need this provider for this - // configuration, so skip it. 
- continue - } - - next[key] = &configs.Provider{ - Name: provider.Name, - NameRange: provider.NameRange, - Alias: provider.Alias, - AliasRange: provider.AliasRange, - Version: provider.Version, - Config: &hcltest.ProviderConfig{ - Original: provider.Config, - ConfigVariables: config.Module.Variables, - AvailableVariables: availableVariables, - AvailableRunBlocks: availableRunBlocks, - }, - DeclRange: provider.DeclRange, - } - } - } - - config.Module.ProviderConfigs = next - return func() { - // Reset the original config within the returned function. - config.Module.ProviderConfigs = previous - }, diags -} diff --git a/internal/moduletest/config/config_test.go b/internal/moduletest/config/config_test.go deleted file mode 100644 index b572bac5081b..000000000000 --- a/internal/moduletest/config/config_test.go +++ /dev/null @@ -1,263 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package config - -import ( - "bytes" - "fmt" - "strings" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/google/go-cmp/cmp/cmpopts" - "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/hclparse" - - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/moduletest" -) - -func TestTransformForTest(t *testing.T) { - - str := func(providers map[string]string) string { - var buffer bytes.Buffer - for key, config := range providers { - buffer.WriteString(fmt.Sprintf("%s: %s\n", key, config)) - } - return buffer.String() - } - - convertToProviders := func(t *testing.T, contents map[string]string) map[string]*configs.Provider { - t.Helper() - - providers := make(map[string]*configs.Provider) - for key, content := range contents { - parser := hclparse.NewParser() - file, diags := parser.ParseHCL([]byte(content), fmt.Sprintf("%s.hcl", key)) - if diags.HasErrors() { - t.Fatal(diags.Error()) - } - - provider := &configs.Provider{ - Config: file.Body, - } - - parts := strings.Split(key, ".") - provider.Name = 
parts[0] - if len(parts) > 1 { - provider.Alias = parts[1] - } - - providers[key] = provider - } - return providers - } - - validate := func(t *testing.T, msg string, expected map[string]string, actual map[string]*configs.Provider) { - t.Helper() - - converted := make(map[string]string) - for key, provider := range actual { - content, err := provider.Config.Content(&hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - {Name: "source", Required: true}, - }, - }) - if err != nil { - t.Fatal(err) - } - - source, diags := content.Attributes["source"].Expr.Value(nil) - if diags.HasErrors() { - t.Fatal(diags.Error()) - } - converted[key] = fmt.Sprintf("source = %q", source.AsString()) - } - - if diff := cmp.Diff(expected, converted); len(diff) > 0 { - t.Errorf("%s\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", msg, str(expected), str(converted), diff) - } - } - - tcs := map[string]struct { - configProviders map[string]string - fileProviders map[string]string - runProviders []configs.PassedProviderConfig - expectedProviders map[string]string - expectedErrors []string - }{ - "empty": { - configProviders: make(map[string]string), - expectedProviders: make(map[string]string), - }, - "only providers in config": { - configProviders: map[string]string{ - "foo": "source = \"config\"", - "bar": "source = \"config\"", - }, - expectedProviders: map[string]string{ - "foo": "source = \"config\"", - "bar": "source = \"config\"", - }, - }, - "only providers in test file": { - configProviders: make(map[string]string), - fileProviders: map[string]string{ - "foo": "source = \"testfile\"", - "bar": "source = \"testfile\"", - }, - expectedProviders: map[string]string{ - "foo": "source = \"testfile\"", - "bar": "source = \"testfile\"", - }, - }, - "only providers in run block": { - configProviders: make(map[string]string), - runProviders: []configs.PassedProviderConfig{ - { - InChild: &configs.ProviderConfigRef{ - Name: "foo", - }, - InParent: &configs.ProviderConfigRef{ - Name: "bar", - }, - }, 
- }, - expectedProviders: make(map[string]string), - expectedErrors: []string{ - ":0,0-0: Missing provider definition for bar; This provider block references a provider definition that does not exist.", - }, - }, - "subset of providers in test file": { - configProviders: make(map[string]string), - fileProviders: map[string]string{ - "bar": "source = \"testfile\"", - }, - runProviders: []configs.PassedProviderConfig{ - { - InChild: &configs.ProviderConfigRef{ - Name: "foo", - }, - InParent: &configs.ProviderConfigRef{ - Name: "bar", - }, - }, - }, - expectedProviders: map[string]string{ - "foo": "source = \"testfile\"", - }, - }, - "overrides providers in config": { - configProviders: map[string]string{ - "foo": "source = \"config\"", - "bar": "source = \"config\"", - }, - fileProviders: map[string]string{ - "bar": "source = \"testfile\"", - }, - expectedProviders: map[string]string{ - "foo": "source = \"config\"", - "bar": "source = \"testfile\"", - }, - }, - "overrides subset of providers in config": { - configProviders: map[string]string{ - "foo": "source = \"config\"", - "bar": "source = \"config\"", - }, - fileProviders: map[string]string{ - "foo": "source = \"testfile\"", - "bar": "source = \"testfile\"", - }, - runProviders: []configs.PassedProviderConfig{ - { - InChild: &configs.ProviderConfigRef{ - Name: "bar", - }, - InParent: &configs.ProviderConfigRef{ - Name: "bar", - }, - }, - }, - expectedProviders: map[string]string{ - "foo": "source = \"config\"", - "bar": "source = \"testfile\"", - }, - }, - "handles aliases": { - configProviders: map[string]string{ - "foo.primary": "source = \"config\"", - "foo.secondary": "source = \"config\"", - }, - fileProviders: map[string]string{ - "foo": "source = \"testfile\"", - }, - runProviders: []configs.PassedProviderConfig{ - { - InChild: &configs.ProviderConfigRef{ - Name: "foo.secondary", - }, - InParent: &configs.ProviderConfigRef{ - Name: "foo", - }, - }, - }, - expectedProviders: map[string]string{ - 
"foo.primary": "source = \"config\"", - "foo.secondary": "source = \"testfile\"", - }, - }, - "ignores unexpected providers in test file": { - configProviders: make(map[string]string), - fileProviders: map[string]string{ - "foo": "source = \"testfile\"", - "bar": "source = \"testfile\"", - }, - expectedProviders: map[string]string{ - "foo": "source = \"testfile\"", - }, - }, - } - for name, tc := range tcs { - t.Run(name, func(t *testing.T) { - config := &configs.Config{ - Module: &configs.Module{ - ProviderConfigs: convertToProviders(t, tc.configProviders), - }, - } - - file := &moduletest.File{ - Config: &configs.TestFile{ - Providers: convertToProviders(t, tc.fileProviders), - }, - } - - run := &moduletest.Run{ - Config: &configs.TestRun{ - Providers: tc.runProviders, - }, - } - - availableProviders := make(map[string]bool, len(tc.expectedProviders)) - for provider := range tc.expectedProviders { - availableProviders[provider] = true - } - - reset, diags := TransformConfigForTest(config, run, file, nil, nil, availableProviders) - - var actualErrs []string - for _, err := range diags.Errs() { - actualErrs = append(actualErrs, err.Error()) - } - if diff := cmp.Diff(actualErrs, tc.expectedErrors, cmpopts.IgnoreUnexported()); len(diff) > 0 { - t.Errorf("unmatched errors\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", strings.Join(tc.expectedErrors, "\n"), strings.Join(actualErrs, "\n"), diff) - } - - validate(t, "after transform mismatch", tc.expectedProviders, config.Module.ProviderConfigs) - reset() - validate(t, "after reset mismatch", tc.configProviders, config.Module.ProviderConfigs) - - }) - } -} diff --git a/internal/moduletest/hcl/context.go b/internal/moduletest/hcl/context.go deleted file mode 100644 index 058a51939afc..000000000000 --- a/internal/moduletest/hcl/context.go +++ /dev/null @@ -1,231 +0,0 @@ -// Copyright (c) HashiCorp, Inc. 
-// SPDX-License-Identifier: BUSL-1.1 - -package hcl - -import ( - "fmt" - - "github.com/hashicorp/hcl/v2" - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/lang" - "github.com/hashicorp/terraform/internal/lang/marks" - "github.com/hashicorp/terraform/internal/terraform" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -type EvalContextTarget string - -const ( - TargetRunBlock EvalContextTarget = "run" - TargetProvider EvalContextTarget = "provider" -) - -// EvalContext builds hcl.EvalContext objects for use directly within the -// testing framework. -// -// We support referencing variables from the file variables block, and any -// global variables provided via the CLI / environment variables / .tfvars -// files. These should be provided in the availableVariables argument, already -// parsed and ready for use. -// -// We also support referencing outputs from any previous run blocks. These -// should be provided in the availableRunBlocks argument. As we also perform -// validation (see below) the format of this argument matters. If it is -// completely null, then we do not support the `run` argument at all in this -// context. If a run block is not present at all, then we should return a "run -// block does not exist" error. If the run block is present, but contains a -// nil context, then we should return a "run block has not yet executed" error. -// Finally, if the run block is present and contains a valid value we should -// use that value in the returned HCL contexts. -// -// As referenced above, this function performs pre-validation to make sure the -// expressions to be evaluated will pass evaluation. Anything present in the -// expressions argument will be validated to make sure the only reference the -// availableVariables and availableRunBlocks. 
-func EvalContext(target EvalContextTarget, expressions []hcl.Expression, availableVariables map[string]cty.Value, availableRunBlocks map[string]*terraform.TestContext) (*hcl.EvalContext, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - runs := make(map[string]cty.Value) - for name, ctx := range availableRunBlocks { - if ctx == nil { - // Then this is a valid run block, but it hasn't executed yet so we - // won't take any values from it. - continue - } - outputs := make(map[string]cty.Value) - for name, config := range ctx.Config.Module.Outputs { - output := ctx.State.OutputValue(addrs.AbsOutputValue{ - OutputValue: addrs.OutputValue{ - Name: name, - }, - Module: addrs.RootModuleInstance, - }) - - var value cty.Value - switch { - case output == nil: - // This means the run block returned null for this output. - // It is likely this will produce an error later if it is - // referenced, but users can actually specify that null - // is an acceptable value for an input variable so we won't - // actually raise a fuss about this at all. - value = cty.NullVal(cty.DynamicPseudoType) - case output.Value.IsNull() || output.Value == cty.NilVal: - // This means the output value was returned as (known after - // apply). If this is referenced it always an error, we - // can't handle this in an appropriate way at all. For now, - // we just mark it as unknown and then later we check and - // resolve all the references. We'll raise an error at that - // point if the user actually attempts to reference a value - // that is unknown. 
- value = cty.DynamicVal - default: - value = output.Value - } - - if config.Sensitive || (output != nil && output.Sensitive) { - value = value.Mark(marks.Sensitive) - } - - outputs[name] = value - } - - runs[name] = cty.ObjectVal(outputs) - } - - for _, expression := range expressions { - refs, refDiags := lang.ReferencesInExpr(addrs.ParseRefFromTestingScope, expression) - diags = diags.Append(refDiags) - - for _, ref := range refs { - if addr, ok := ref.Subject.(addrs.Run); ok { - ctx, exists := availableRunBlocks[addr.Name] - - var diagPrefix string - switch target { - case TargetRunBlock: - diagPrefix = "You can only reference run blocks that are in the same test file and will execute before the current run block." - case TargetProvider: - diagPrefix = "You can only reference run blocks that are in the same test file and will execute before the provider is required." - } - - if !exists { - // Then this is a made up run block. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Reference to unknown run block", - Detail: fmt.Sprintf("The run block %q does not exist within this test file. %s", addr.Name, diagPrefix), - Subject: ref.SourceRange.ToHCL().Ptr(), - }) - - continue - } - - if ctx == nil { - // This run block exists, but it is after the current run block. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Reference to unavailable run block", - Detail: fmt.Sprintf("The run block %q has not executed yet. %s", addr.Name, diagPrefix), - Subject: ref.SourceRange.ToHCL().Ptr(), - }) - - continue - } - - value, valueDiags := ref.Remaining.TraverseRel(runs[addr.Name]) - diags = diags.Append(valueDiags) - if valueDiags.HasErrors() { - // This means the reference was invalid somehow, we've - // already added the errors to our diagnostics though so - // we'll just carry on. - continue - } - - if !value.IsWhollyKnown() { - // This is not valid, we cannot allow users to pass unknown - // values into run blocks. 
There are just going to be - // difficult and confusing errors later if this happens. - - if ctx.Run.Config.Command == configs.PlanTestCommand { - // Then the user has likely attempted to use an output - // that is (known after apply) due to the referenced - // run block only being a plan command. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Reference to unknown value", - Detail: fmt.Sprintf("The value for %s is unknown. Run block %q is executing a \"plan\" operation, and the specified output value is only known after apply.", ref.DisplayString(), addr.Name), - Subject: ref.SourceRange.ToHCL().Ptr(), - }) - - continue - } - - // Otherwise, this is a bug in Terraform. We shouldn't be - // producing (known after apply) values during apply - // operations. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Reference to unknown value", - Detail: fmt.Sprintf("The value for %s is unknown. This is a bug in Terraform, please report it.", ref.DisplayString()), - Subject: ref.SourceRange.ToHCL().Ptr(), - }) - } - - continue - } - - if addr, ok := ref.Subject.(addrs.InputVariable); ok { - if _, exists := availableVariables[addr.Name]; !exists { - // This variable reference doesn't exist. - - detail := fmt.Sprintf("The input variable %q is not available to the current run block. You can only reference variables defined at the file or global levels when populating the variables block within a run block.", addr.Name) - if availableRunBlocks == nil { - detail = fmt.Sprintf("The input variable %q is not available to the current provider configuration. You can only reference variables defined at the file or global levels within provider configurations.", addr.Name) - } - - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Reference to unavailable variable", - Detail: detail, - Subject: ref.SourceRange.ToHCL().Ptr(), - }) - - continue - } - - // Otherwise, we're good.
This is an acceptable reference. - continue - } - - detail := "You can only reference earlier run blocks, file level, and global variables while defining variables from inside a run block." - if availableRunBlocks == nil { - detail = "You can only reference file level and global variables from inside provider configurations within test files." - } - - // You can only reference run blocks and variables from the run - // block variables. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid reference", - Detail: detail, - Subject: ref.SourceRange.ToHCL().Ptr(), - }) - } - } - - return &hcl.EvalContext{ - Variables: func() map[string]cty.Value { - variables := make(map[string]cty.Value) - variables["var"] = cty.ObjectVal(availableVariables) - if availableRunBlocks != nil { - variables["run"] = cty.ObjectVal(runs) - } - return variables - }(), - }, diags -} diff --git a/internal/moduletest/hcl/provider.go b/internal/moduletest/hcl/provider.go deleted file mode 100644 index 272e28f19bdc..000000000000 --- a/internal/moduletest/hcl/provider.go +++ /dev/null @@ -1,148 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package hcl - -import ( - "github.com/hashicorp/hcl/v2" - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/backend" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/lang" - "github.com/hashicorp/terraform/internal/terraform" -) - -var _ hcl.Body = (*ProviderConfig)(nil) - -// ProviderConfig is an implementation of an hcl.Body that evaluates the -// attributes within the block using the provided config and variables before -// returning them.
-// -// This is used by configs.Provider objects that are defined within the test -// framework, so they should only use variables available to the test framework. -// However, they are initialised within the Terraform graph, so we have to delay -// evaluation of their attributes until the schemas are retrieved. -type ProviderConfig struct { - Original hcl.Body - - ConfigVariables map[string]*configs.Variable - AvailableVariables map[string]backend.UnparsedVariableValue - AvailableRunBlocks map[string]*terraform.TestContext -} - -func (p *ProviderConfig) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) { - content, diags := p.Original.Content(schema) - attrs, attrDiags := p.transformAttributes(content.Attributes) - diags = append(diags, attrDiags...) - - return &hcl.BodyContent{ - Attributes: attrs, - Blocks: p.transformBlocks(content.Blocks), - MissingItemRange: content.MissingItemRange, - }, diags -} - -func (p *ProviderConfig) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) { - content, rest, diags := p.Original.PartialContent(schema) - attrs, attrDiags := p.transformAttributes(content.Attributes) - diags = append(diags, attrDiags...) - - return &hcl.BodyContent{ - Attributes: attrs, - Blocks: p.transformBlocks(content.Blocks), - MissingItemRange: content.MissingItemRange, - }, &ProviderConfig{rest, p.ConfigVariables, p.AvailableVariables, p.AvailableRunBlocks}, diags -} - -func (p *ProviderConfig) JustAttributes() (hcl.Attributes, hcl.Diagnostics) { - originals, diags := p.Original.JustAttributes() - attrs, moreDiags := p.transformAttributes(originals) - return attrs, append(diags, moreDiags...)
-} - -func (p *ProviderConfig) MissingItemRange() hcl.Range { - return p.Original.MissingItemRange() -} - -func (p *ProviderConfig) transformAttributes(originals hcl.Attributes) (hcl.Attributes, hcl.Diagnostics) { - var diags hcl.Diagnostics - - availableVariables := make(map[string]cty.Value) - var exprs []hcl.Expression - - for _, original := range originals { - exprs = append(exprs, original.Expr) - - // We also need to parse the variables we're going to use, so we extract - // the references from this expression now and see if they reference any - // input variables. If we find an input variable, we'll copy it into - // our availableVariables local. - refs, _ := lang.ReferencesInExpr(addrs.ParseRefFromTestingScope, original.Expr) - for _, ref := range refs { - if addr, ok := ref.Subject.(addrs.InputVariable); ok { - if _, exists := availableVariables[addr.Name]; exists { - // Then we've processed this variable before. This just - // means it's referenced twice in this provider config - - // which is fine, we just don't need to do it again. - continue - } - - if variable, exists := p.AvailableVariables[addr.Name]; exists { - // Then we have a value for this variable! So we think we'll - // be able to process it - let's parse it now. - - parsingMode := configs.VariableParseHCL - if config, exists := p.ConfigVariables[addr.Name]; exists { - parsingMode = config.ParsingMode - } - - value, valueDiags := variable.ParseVariableValue(parsingMode) - diags = append(diags, valueDiags.ToHCL()...) - if value != nil { - availableVariables[addr.Name] = value.Value - } - } - } - } - } - - ctx, ctxDiags := EvalContext(TargetProvider, exprs, availableVariables, p.AvailableRunBlocks) - diags = append(diags, ctxDiags.ToHCL()...) - if ctxDiags.HasErrors() { - return nil, diags - } - - attrs := make(hcl.Attributes, len(originals)) - for name, attr := range originals { - value, valueDiags := attr.Expr.Value(ctx) - diags = append(diags, valueDiags...) 
- if valueDiags.HasErrors() { - continue - } else { - attrs[name] = &hcl.Attribute{ - Name: name, - Expr: hcl.StaticExpr(value, attr.Expr.Range()), - Range: attr.Range, - NameRange: attr.NameRange, - } - } - } - return attrs, diags -} - -func (p *ProviderConfig) transformBlocks(originals hcl.Blocks) hcl.Blocks { - blocks := make(hcl.Blocks, len(originals)) - for name, block := range originals { - blocks[name] = &hcl.Block{ - Type: block.Type, - Labels: block.Labels, - Body: &ProviderConfig{block.Body, p.ConfigVariables, p.AvailableVariables, p.AvailableRunBlocks}, - DefRange: block.DefRange, - TypeRange: block.TypeRange, - LabelRanges: block.LabelRanges, - } - } - return blocks -} diff --git a/internal/moduletest/hcl/provider_test.go b/internal/moduletest/hcl/provider_test.go deleted file mode 100644 index f2bf3e591c4c..000000000000 --- a/internal/moduletest/hcl/provider_test.go +++ /dev/null @@ -1,272 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package hcl - -import ( - "strings" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/hclsyntax" - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/backend" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/states" - "github.com/hashicorp/terraform/internal/terraform" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -func TestProviderConfig(t *testing.T) { - - tcs := map[string]struct { - content string - schema *hcl.BodySchema - variables map[string]cty.Value - runBlockOutputs map[string]map[string]cty.Value - validate func(t *testing.T, content *hcl.BodyContent) - expectedErrors []string - }{ - "simple_no_vars": { - content: "attribute = \"string\"", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - validate: func(t *testing.T, content 
*hcl.BodyContent) { - equals(t, content, "attribute", cty.StringVal("string")) - }, - }, - "simple_var_ref": { - content: "attribute = var.input", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - variables: map[string]cty.Value{ - "input": cty.StringVal("string"), - }, - validate: func(t *testing.T, content *hcl.BodyContent) { - equals(t, content, "attribute", cty.StringVal("string")) - }, - }, - "missing_var_ref": { - content: "attribute = var.missing", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - variables: map[string]cty.Value{ - "input": cty.StringVal("string"), - }, - expectedErrors: []string{ - "The input variable \"missing\" is not available to the current run block. You can only reference variables defined at the file or global levels when populating the variables block within a run block.", - }, - validate: func(t *testing.T, content *hcl.BodyContent) { - if len(content.Attributes) > 0 { - t.Errorf("should have excluded the invalid attribute but found %d", len(content.Attributes)) - } - }, - }, - "simple_run_block": { - content: "attribute = run.setup.value", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - runBlockOutputs: map[string]map[string]cty.Value{ - "setup": { - "value": cty.StringVal("string"), - }, - }, - validate: func(t *testing.T, content *hcl.BodyContent) { - equals(t, content, "attribute", cty.StringVal("string")) - }, - }, - "missing_run_block": { - content: "attribute = run.missing.value", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - runBlockOutputs: map[string]map[string]cty.Value{ - "setup": { - "value": cty.StringVal("string"), - }, - }, - expectedErrors: []string{ - "The run block \"missing\" does not exist within this test file. 
You can only reference run blocks that are in the same test file and will execute before the provider is required.", - }, - validate: func(t *testing.T, content *hcl.BodyContent) { - if len(content.Attributes) > 0 { - t.Errorf("should have excluded the invalid attribute but found %d", len(content.Attributes)) - } - }, - }, - "late_run_block": { - content: "attribute = run.setup.value", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - runBlockOutputs: map[string]map[string]cty.Value{ - "setup": nil, - }, - expectedErrors: []string{ - "The run block \"setup\" has not executed yet. You can only reference run blocks that are in the same test file and will execute before the provider is required.", - }, - validate: func(t *testing.T, content *hcl.BodyContent) { - if len(content.Attributes) > 0 { - t.Errorf("should have excluded the invalid attribute but found %d", len(content.Attributes)) - } - }, - }, - "invalid_ref": { - content: "attribute = data.type.name.value", - schema: &hcl.BodySchema{ - Attributes: []hcl.AttributeSchema{ - { - Name: "attribute", - }, - }, - }, - runBlockOutputs: map[string]map[string]cty.Value{ - "setup": nil, - }, - expectedErrors: []string{ - "You can only reference earlier run blocks, file level, and global variables while defining variables from inside a run block.", - }, - validate: func(t *testing.T, content *hcl.BodyContent) { - if len(content.Attributes) > 0 { - t.Errorf("should have excluded the invalid attribute but found %d", len(content.Attributes)) - } - }, - }, - } - - for name, tc := range tcs { - t.Run(name, func(t *testing.T) { - - file, diags := hclsyntax.ParseConfig([]byte(tc.content), "main.tf", hcl.Pos{Line: 1, Column: 1}) - if diags.HasErrors() { - t.Fatalf("failed to parse hcl: %s", diags.Error()) - } - - config := ProviderConfig{ - Original: file.Body, - ConfigVariables: func() map[string]*configs.Variable { - variables := make(map[string]*configs.Variable) - for 
variable := range tc.variables { - variables[variable] = &configs.Variable{ - Name: variable, - } - } - return variables - }(), - AvailableVariables: func() map[string]backend.UnparsedVariableValue { - variables := make(map[string]backend.UnparsedVariableValue) - for name, value := range tc.variables { - variables[name] = &variable{value} - } - return variables - }(), - AvailableRunBlocks: func() map[string]*terraform.TestContext { - statuses := make(map[string]*terraform.TestContext) - for name, values := range tc.runBlockOutputs { - if values == nil { - statuses[name] = nil - continue - } - - state := states.BuildState(func(state *states.SyncState) { - for name, value := range values { - state.SetOutputValue(addrs.AbsOutputValue{ - Module: addrs.RootModuleInstance, - OutputValue: addrs.OutputValue{ - Name: name, - }, - }, value, false) - } - }) - - config := &configs.Config{ - Module: &configs.Module{ - Outputs: func() map[string]*configs.Output { - outputs := make(map[string]*configs.Output) - for name := range values { - outputs[name] = &configs.Output{ - Name: name, - } - } - return outputs - }(), - }, - } - - statuses[name] = &terraform.TestContext{ - Config: config, - State: state, - } - } - return statuses - }(), - } - - content, diags := config.Content(tc.schema) - - var actualErrs []string - for _, diag := range diags { - actualErrs = append(actualErrs, diag.Detail) - } - if diff := cmp.Diff(actualErrs, tc.expectedErrors); len(diff) > 0 { - t.Errorf("unmatched errors\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", strings.Join(tc.expectedErrors, "\n"), strings.Join(actualErrs, "\n"), diff) - } - - tc.validate(t, content) - }) - } -} - -func equals(t *testing.T, content *hcl.BodyContent, attribute string, expected cty.Value) { - value, diags := content.Attributes[attribute].Expr.Value(nil) - if diags.HasErrors() { - t.Errorf("failed to get value from attribute %s: %s", attribute, diags.Error()) - } - if !value.RawEquals(expected) { - t.Errorf("expected:\n%s\nbut 
got:\n%s", expected.GoString(), value.GoString()) - } -} - -var _ backend.UnparsedVariableValue = (*variable)(nil) - -type variable struct { - value cty.Value -} - -func (v *variable) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) { - return &terraform.InputValue{ - Value: v.value, - SourceType: terraform.ValueFromUnknown, - }, nil -} diff --git a/internal/moduletest/mocking/overrides.go b/internal/moduletest/mocking/overrides.go deleted file mode 100644 index 3f6ed5bfa009..000000000000 --- a/internal/moduletest/mocking/overrides.go +++ /dev/null @@ -1,204 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package mocking - -import ( - "fmt" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" -) - -// Overrides contains a summary of all the overrides that should apply for a -// test run. -// -// This requires us to deduplicate between run blocks and test files, and mock -// providers. -type Overrides struct { - providerOverrides map[string]addrs.Map[addrs.Targetable, *configs.Override] - localOverrides addrs.Map[addrs.Targetable, *configs.Override] -} - -func PackageOverrides(run *configs.TestRun, file *configs.TestFile, config *configs.Config) *Overrides { - overrides := &Overrides{ - providerOverrides: make(map[string]addrs.Map[addrs.Targetable, *configs.Override]), - localOverrides: addrs.MakeMap[addrs.Targetable, *configs.Override](), - } - - // The run block overrides have the highest priority, we always include all - // of them. - for _, elem := range run.Overrides.Elems { - overrides.localOverrides.PutElement(elem) - } - - // The file overrides are second, we include these as long as there isn't - // a direct replacement in the current run block or the run block doesn't - // override an entire module that a file override would be inside. 
- for _, elem := range file.Overrides.Elems { - target := elem.Key - - if overrides.localOverrides.Has(target) { - // The run block provided a value already. - continue - } - - overrides.localOverrides.PutElement(elem) - } - - // Finally, we want to include the overrides for any mock providers we have. - for key, provider := range config.Module.ProviderConfigs { - if !provider.Mock { - // Only mock providers can supply overrides. - continue - } - - for _, elem := range provider.MockData.Overrides.Elems { - target := elem.Key - - if overrides.localOverrides.Has(target) { - // Then the file or the run block is providing an override with - // higher precedence. - continue - } - - if _, exists := overrides.providerOverrides[key]; !exists { - overrides.providerOverrides[key] = addrs.MakeMap[addrs.Targetable, *configs.Override]() - } - overrides.providerOverrides[key].PutElement(elem) - } - } - - return overrides -} - -// IsOverridden returns true if the module is either overridden directly or -// nested within another module that is already being overridden. -// -// For this function, we know that overrides defined within mock providers -// cannot target modules directly. Therefore, we only need to check the local -// overrides within this function. -func (overrides *Overrides) IsOverridden(module addrs.ModuleInstance) bool { - if overrides.localOverrides.Has(module) { - // Short circuit things, if we have an exact match just return now. - return true - } - - // Otherwise, check for parents. - for _, elem := range overrides.localOverrides.Elems { - if elem.Key.TargetContains(module) { - // Then we have an ancestor of module being overridden instead of - // module being overridden directly. - return true - } - } - - return false -} - -// IsDeeplyOverridden returns true if an ancestor of this module is overridden -// but not if the module is overridden directly. 
-// -// This function doesn't consider an instanced module to be deeply overridden -// by the uninstanced reference to the same module. So, -// IsDeeplyOverridden("mod.child[0]") would return false if "mod.child" has been -// overridden. -// -// For this function, we know that overrides defined within mock providers -// cannot target modules directly. Therefore, we only need to check the local -// overrides within this function. -func (overrides *Overrides) IsDeeplyOverridden(module addrs.ModuleInstance) bool { - for _, elem := range overrides.localOverrides.Elems { - target := elem.Key - - if target.TargetContains(module) { - // So we do think it contains it, but it could be matching here - // because of equality or because we have an instanced module. - if instance, ok := target.(addrs.ModuleInstance); ok { - if instance.Equal(module) { - // Then we're exactly equal, so not deeply nested. - continue - } - - if instance.Module().Equal(module.Module()) { - // Then we're an instanced version of the other one, so - // also not deeply nested by our definition of deeply. - continue - } - - } - - // Otherwise, it's deeply nested. - return true - } - } - return false -} - -// GetOverrideInclProviders retrieves the override for target if it exists. -// -// This function also checks the provider-specific overrides using the provider -// argument. -func (overrides *Overrides) GetOverrideInclProviders(target addrs.Targetable, provider addrs.AbsProviderConfig) (*configs.Override, bool) { - // If we have a local override, then apply that first. - if override, ok := overrides.GetOverride(target); ok { - return override, true - } - - // Otherwise, check if we have overrides for this provider. - providerOverrides, ok := overrides.ProviderMatch(provider) - if ok { - if override, ok := providerOverrides.GetOk(target); ok { - return override, true - } - } - - // If we have no overrides, that's okay.
- return nil, false -} - -// GetOverride retrieves the override for target from the local overrides if -// it exists. -func (overrides *Overrides) GetOverride(target addrs.Targetable) (*configs.Override, bool) { - return overrides.localOverrides.GetOk(target) -} - -// ProviderMatch returns true if we have overrides for the given provider. -// -// This is so that we can selectively apply overrides to resources that are -// being supplied by a given provider. -func (overrides *Overrides) ProviderMatch(provider addrs.AbsProviderConfig) (addrs.Map[addrs.Targetable, *configs.Override], bool) { - if !provider.Module.IsRoot() { - // We can only set mock providers within the root module. - return addrs.Map[addrs.Targetable, *configs.Override]{}, false - } - - name := provider.Provider.Type - if len(provider.Alias) > 0 { - name = fmt.Sprintf("%s.%s", name, provider.Alias) - } - - data, exists := overrides.providerOverrides[name] - return data, exists -} - -// Empty returns true if we have no actual overrides. -// -// This is just a convenience function to make checking for overrides easier. -func (overrides *Overrides) Empty() bool { - if overrides == nil { - return true - } - - if overrides.localOverrides.Len() > 0 { - return false - } - - for _, value := range overrides.providerOverrides { - if value.Len() > 0 { - return false - } - } - - return true -} diff --git a/internal/moduletest/mocking/overrides_test.go b/internal/moduletest/mocking/overrides_test.go deleted file mode 100644 index 7196f986e387..000000000000 --- a/internal/moduletest/mocking/overrides_test.go +++ /dev/null @@ -1,119 +0,0 @@ -// Copyright (c) HashiCorp, Inc. 
-// SPDX-License-Identifier: BUSL-1.1 - -package mocking - -import ( - "testing" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" -) - -func TestPackageOverrides(t *testing.T) { - mustResourceInstance := func(s string) addrs.AbsResourceInstance { - addr, diags := addrs.ParseAbsResourceInstanceStr(s) - if len(diags) > 0 { - t.Fatal(diags) - } - return addr - } - - primary := mustResourceInstance("test_instance.primary") - secondary := mustResourceInstance("test_instance.secondary") - tertiary := mustResourceInstance("test_instance.tertiary") - - testrun := mustResourceInstance("test_instance.test_run") - testfile := mustResourceInstance("test_instance.test_file") - provider := mustResourceInstance("test_instance.provider") - - // Add a single override to the test run. - run := &configs.TestRun{ - Overrides: addrs.MakeMap[addrs.Targetable, *configs.Override](), - } - run.Overrides.Put(primary, &configs.Override{ - Target: &addrs.Target{ - Subject: testrun, - }, - }) - - // Add a unique item to the test file, and duplicate the test run data. - file := &configs.TestFile{ - Overrides: addrs.MakeMap[addrs.Targetable, *configs.Override](), - } - file.Overrides.Put(primary, &configs.Override{ - Target: &addrs.Target{ - Subject: testfile, - }, - }) - file.Overrides.Put(secondary, &configs.Override{ - Target: &addrs.Target{ - Subject: testfile, - }, - }) - - // Duplicate all the data from the file and run block here, and then add - // a unique one.
- config := &configs.Config{ - Module: &configs.Module{ - ProviderConfigs: map[string]*configs.Provider{ - "mock": { - Mock: true, - MockData: &configs.MockData{ - Overrides: addrs.MakeMap[addrs.Targetable, *configs.Override](), - }, - }, - "real": {}, - }, - }, - } - config.Module.ProviderConfigs["mock"].MockData.Overrides.Put(primary, &configs.Override{ - Target: &addrs.Target{ - Subject: provider, - }, - }) - config.Module.ProviderConfigs["mock"].MockData.Overrides.Put(secondary, &configs.Override{ - Target: &addrs.Target{ - Subject: provider, - }, - }) - config.Module.ProviderConfigs["mock"].MockData.Overrides.Put(tertiary, &configs.Override{ - Target: &addrs.Target{ - Subject: provider, - }, - }) - - overrides := PackageOverrides(run, file, config) - - // We now expect that the run and file overrides took precedence. - first, pOk := overrides.GetOverride(primary) - second, sOk := overrides.GetOverride(secondary) - third, tOk := overrides.GetOverrideInclProviders(tertiary, addrs.AbsProviderConfig{ - Provider: addrs.Provider{ - Type: "mock", - }, - }) - - if !pOk || !sOk || !tOk { - t.Fatalf("expected to find all overrides, but got %t %t %t", pOk, sOk, tOk) - } - - if !first.Target.Subject.(addrs.AbsResourceInstance).Equal(testrun) { - t.Errorf("expected %s but got %s for primary", testrun, first.Target.Subject) - } - - if !second.Target.Subject.(addrs.AbsResourceInstance).Equal(testfile) { - t.Errorf("expected %s but got %s for secondary", testfile, second.Target.Subject) - } - - if !third.Target.Subject.(addrs.AbsResourceInstance).Equal(provider) { - t.Errorf("expected %s but got %s for tertiary", provider, third.Target.Subject) - } - - // Also, final sanity check.
- _, ok := overrides.providerOverrides["real"] - if ok { - t.Errorf("shouldn't have stored the real provider but did") - } - -} diff --git a/internal/moduletest/mocking/testing.go b/internal/moduletest/mocking/testing.go deleted file mode 100644 index 6ec498f3153a..000000000000 --- a/internal/moduletest/mocking/testing.go +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package mocking - -import ( - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" -) - -type InitProviderOverrides func(map[string]addrs.Map[addrs.Targetable, *configs.Override]) -type InitLocalOverrides func(addrs.Map[addrs.Targetable, *configs.Override]) - -func OverridesForTesting(providers InitProviderOverrides, locals InitLocalOverrides) *Overrides { - overrides := &Overrides{ - providerOverrides: make(map[string]addrs.Map[addrs.Targetable, *configs.Override]), - localOverrides: addrs.MakeMap[addrs.Targetable, *configs.Override](), - } - - if providers != nil { - providers(overrides.providerOverrides) - } - - if locals != nil { - locals(overrides.localOverrides) - } - - return overrides -} diff --git a/internal/moduletest/mocking/values.go b/internal/moduletest/mocking/values.go deleted file mode 100644 index 2d9d8c440ef0..000000000000 --- a/internal/moduletest/mocking/values.go +++ /dev/null @@ -1,329 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package mocking - -import ( - "fmt" - "math/rand" - - "github.com/hashicorp/hcl/v2" - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" - - "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -var ( - // testRand and chars are used to generate random strings for the computed - // values. - // - // If testRand is null, then the global random is used. This allows us to - // seed tests for repeatable results. 
- testRand *rand.Rand - chars = []rune("abcdefghijklmnopqrstuvwxyz0123456789") -) - -// PlanComputedValuesForResource accepts a target value, and populates it with -// cty.UnknownValues wherever a value should be computed during the apply stage. -// -// This method basically simulates the behaviour of a plan request in a real -// provider. -func PlanComputedValuesForResource(original cty.Value, schema *configschema.Block) (cty.Value, tfdiags.Diagnostics) { - return populateComputedValues(original, ReplacementValue{}, schema, isNull, makeUnknown) -} - -// ApplyComputedValuesForResource accepts a target value, and populates it -// either with values from the provided with argument, or with generated values -// created semi-randomly. This will only target values that are computed and -// unknown. -// -// This method basically simulates the behaviour of an apply request in a real -// provider. -func ApplyComputedValuesForResource(original cty.Value, with ReplacementValue, schema *configschema.Block) (cty.Value, tfdiags.Diagnostics) { - return populateComputedValues(original, with, schema, isUnknown, with.makeKnown) -} - -// ComputedValuesForDataSource accepts a target value, and populates it either -// with values from the provided with argument, or with generated values created -// semi-randomly. This will only target values that are computed and null. -// -// This function does what PlanComputedValuesForResource and -// ApplyComputedValuesForResource do but in a single step with no intermediary -// unknown stage. -// -// This method basically simulates the behaviour of a get data source request -// in a real provider. 
-func ComputedValuesForDataSource(original cty.Value, with ReplacementValue, schema *configschema.Block) (cty.Value, tfdiags.Diagnostics) { - return populateComputedValues(original, with, schema, isNull, with.makeKnown) -} - -type processValue func(value cty.Value) bool - -type populateValue func(value cty.Value, with cty.Value, path cty.Path) (cty.Value, tfdiags.Diagnostics) - -func populateComputedValues(target cty.Value, with ReplacementValue, schema *configschema.Block, processValue processValue, populateValue populateValue) (cty.Value, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - if !with.validate() { - // This is actually a user error, it means the user wrote something like - // `values = "not an object"` when defining the replacement values for - // this in the mock or test file. We should have caught this earlier in - // the validation, but we want this function to be robust and not panic - // so we'll check again just in case. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid replacement value", - Detail: fmt.Sprintf("The requested replacement value must be an object type, but was %s.", with.Value.Type().FriendlyName()), - Subject: with.Range.Ptr(), - }) - } - - // We're going to search for any elements within the target value that meet - // the joint criteria of being computed and whatever processValue is - // checking. - // - // We'll then replace anything that meets the criteria with the output of - // populateValue. - // - // This transform should be robust (in that it should never fail), it'll - // populate the external diags variable with any values it should have - // replaced but couldn't and just return the original value. - value, err := cty.Transform(target, func(path cty.Path, target cty.Value) (cty.Value, error) { - - // Get the attribute for the current target. 
- attribute := schema.AttributeByPath(path) - - if attribute == nil { - // Then this is an intermediate path which does not represent an - // attribute, and it cannot be computed. - return target, nil - } - - // Now, we check if we should be replacing this value with something. - if attribute.Computed && processValue(target) { - - // Get the value we should be replacing target with. - replacement, replacementDiags := with.getReplacementSafe(path) - diags = diags.Append(replacementDiags) - - // Upstream code (in node_resource_abstract_instance.go) expects - // us to return a valid object (even if we have errors). That means - // no unknown values, no cty.NilVals, etc. So, we're going to go - // ahead and call populateValue with whatever getReplacementSafe - // gave us. getReplacementSafe is robust, so even in an error it - // should have given us something we can use in populateValue. - - // Now get the replacement value. This function should be robust in - // that it may return diagnostics explaining why it couldn't replace - // the value, but it'll still return a value for us to use. - value, valueDiags := populateValue(target, replacement, path) - diags = diags.Append(valueDiags) - - // We always return a valid value; the diags are attached to the - // global diags outside the nested function. - return value, nil - } - - // If we don't need to replace this value, then just return it - // untouched. - return target, nil - }) - if err != nil { - // This shouldn't actually happen - we never return an error from inside - // the transform function. But, just in case: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Failed to generate values", - Detail: fmt.Sprintf("Terraform failed to generate computed values for a mocked resource, data source, or module: %s.
This is a bug in Terraform - please report it.", err), - Subject: with.Range.Ptr(), - }) - } - - return value, diags -} - -func isNull(target cty.Value) bool { - return target.IsNull() -} - -func isUnknown(target cty.Value) bool { - return !target.IsKnown() -} - -func makeUnknown(target, _ cty.Value, _ cty.Path) (cty.Value, tfdiags.Diagnostics) { - return cty.UnknownVal(target.Type()), nil -} - -// ReplacementValue is just a helper struct that wraps the thing we're -// interested in (the value) with some metadata that will make our diagnostics -// a bit more helpful. -type ReplacementValue struct { - Value cty.Value - Range hcl.Range -} - -func (replacement ReplacementValue) makeKnown(target, with cty.Value, path cty.Path) (cty.Value, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - if with != cty.NilVal { - // Then we have a pre-made value to replace it with. We'll make sure it - // is compatible with a conversion, and then just return it in place. - - if value, err := convert.Convert(with, target.Type()); err != nil { - diags = diags.Append(tfdiags.AttributeValue( - tfdiags.Error, - "Failed to replace target attribute", - fmt.Sprintf("Terraform could not replace the target type %s with the replacement value defined at %s within %s: %s.", target.Type().FriendlyName(), fmtPath(path), replacement.Range, err), - path)) - - // We still want to return a valid value here. If the conversion did - // not work we carry on and just create a value instead. We've made - // a note of the diagnostics tracking why it didn't work so the - // overall operation will still fail, but we won't crash later on - // because of an unknown value or something. - - } else { - // Successful conversion! We can just return the new value. - return value, diags - } - } - - // Otherwise, we'll have to generate some values. - // We just return zero values for most of the types. The only exceptions are - // objects and strings.
For strings, we generate 8 random alphanumeric - // characters. Objects need to be valid types, so we iterate over their - // attributes and recursively call this function to generate values for - // each attribute. - - switch { - case target.Type().IsPrimitiveType(): - switch target.Type() { - case cty.String: - return cty.StringVal(str(8)), diags - case cty.Number: - return cty.Zero, diags - case cty.Bool: - return cty.False, diags - default: - panic(fmt.Errorf("unknown primitive type: %s", target.Type().FriendlyName())) - } - case target.Type().IsListType(): - return cty.ListValEmpty(target.Type().ElementType()), diags - case target.Type().IsSetType(): - return cty.SetValEmpty(target.Type().ElementType()), diags - case target.Type().IsMapType(): - return cty.MapValEmpty(target.Type().ElementType()), diags - case target.Type().IsObjectType(): - children := make(map[string]cty.Value) - for name, attribute := range target.Type().AttributeTypes() { - child, childDiags := replacement.makeKnown(cty.UnknownVal(attribute), cty.NilVal, path.GetAttr(name)) - diags = diags.Append(childDiags) - children[name] = child - } - return cty.ObjectVal(children), diags - default: - panic(fmt.Errorf("unknown complex type: %s", target.Type().FriendlyName())) - } -} - -// We can only do replacements if the replacement value is an object type. -func (replacement ReplacementValue) validate() bool { - return replacement.Value == cty.NilVal || replacement.Value.Type().IsObjectType() -} - -// getReplacementSafe walks the path to find any potential replacement value for -// a given path. We have implemented custom logic for walking the path here. -// -// This is to support nested block types. It's complicated to work out how to -// replace computed values within nested types. For example, how would a user -// say they just want to replace values at index 3? Or how would users indicate -// they want to replace anything at all within nested sets?
The indices for sets -// will never match because the user-supplied values will, by design, have -// concrete values for the computed attributes, while in the values from -// Terraform those attributes will be null or unknown, so the paths will never -// be the same. -// -// In practice, this means that for nested blocks and attributes, -// users can only specify a single replacement value that will apply to all -// the values within the nested collection. -// -// TODO(liamcervante): Revisit this function: is it possible and/or easy for us -// to support specific targeting of elements in collections? -func (replacement ReplacementValue) getReplacementSafe(path cty.Path) (cty.Value, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - if replacement.Value == cty.NilVal { - return cty.NilVal, diags - } - - // We want to provide a nice printout of the path in case of an error. - // We'll format it as we go. - var currentPath cty.Path - - // We are copying the implementation of AttributeByPath inside the - // schema for this. We skip over GetIndexSteps as they'll be referring to - // the intermediate nested blocks and attributes that we aren't capturing - // within the user-supplied mock values. - current := replacement.Value - for _, step := range path { - switch step := step.(type) { - case cty.GetAttrStep: - - if !current.Type().IsObjectType() { - // As we're still traversing the path, we expect things to be - // objects at every level. - diags = diags.Append(tfdiags.AttributeValue( - tfdiags.Error, - "Failed to replace target attribute", - fmt.Sprintf("Terraform expected an object type at %s within the replacement value defined at %s, but found %s.", fmtPath(currentPath), replacement.Range, current.Type().FriendlyName()), - currentPath)) - - return cty.NilVal, diags - } - - if !current.Type().HasAttribute(step.Name) { - // Then we're not providing a replacement value for this path.
- return cty.NilVal, diags - } - - current = current.GetAttr(step.Name) - } - - currentPath = append(currentPath, step) - } - - return current, diags -} - -func fmtPath(path cty.Path) string { - var current string - - first := true - for _, step := range path { - // Since we only ever parse the attribute steps when finding replacement - // values, we can do the same again here. - switch step := step.(type) { - case cty.GetAttrStep: - if first { - first = false - current = step.Name - continue - } - current = fmt.Sprintf("%s.%s", current, step.Name) - } - } - return current -} - -func str(n int) string { - b := make([]rune, n) - for i := range b { - if testRand != nil { - b[i] = chars[testRand.Intn(len(chars))] - } else { - b[i] = chars[rand.Intn(len(chars))] - } - } - return string(b) -} diff --git a/internal/moduletest/mocking/values_test.go b/internal/moduletest/mocking/values_test.go deleted file mode 100644 index 4824b7c999de..000000000000 --- a/internal/moduletest/mocking/values_test.go +++ /dev/null @@ -1,743 +0,0 @@ -// Copyright (c) HashiCorp, Inc. 
-// SPDX-License-Identifier: BUSL-1.1 - -package mocking - -import ( - "math/rand" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/terraform/internal/configs/configschema" -) - -var ( - normalAttributes = map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - }, - "value": { - Type: cty.String, - }, - } - - computedAttributes = map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - "value": { - Type: cty.String, - }, - } - - normalBlock = configschema.Block{ - Attributes: normalAttributes, - } - - computedBlock = configschema.Block{ - Attributes: computedAttributes, - } -) - -func TestComputedValuesForDataSource(t *testing.T) { - tcs := map[string]struct { - target cty.Value - with cty.Value - schema *configschema.Block - expected cty.Value - expectedFailures []string - }{ - "nil_target_no_unknowns": { - target: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.NilVal, - schema: &normalBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - }, - "empty_target_no_unknowns": { - target: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.EmptyObjectVal, - schema: &normalBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - }, - "basic_computed_attribute_preset": { - target: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.NilVal, - schema: &computedBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - }, - "basic_computed_attribute_random": { - target: 
cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.NilVal, - schema: &computedBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("Hello, world!"), - }), - }, - "basic_computed_attribute_supplied": { - target: cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - schema: &computedBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("Hello, world!"), - }), - }, - "nested_single_block_preset": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("Hello, world!"), - }), - }), - with: cty.NilVal, - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingSingle, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("Hello, world!"), - }), - }), - }, - "nested_single_block_supplied": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("Hello, world!"), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingSingle, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": 
cty.StringVal("Hello, world!"), - }), - }), - }, - "nested_list_block_preset": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.NilVal, - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingList, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("amyllmyg"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_list_block_supplied": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingList, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_set_block_preset": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": 
cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.NilVal, - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingSet, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("amyllmyg"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_set_block_supplied": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingSet, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_map_block_preset": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - "two": 
cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.NilVal, - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingMap, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("one"), - }), - "two": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("amyllmyg"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_map_block_supplied": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - "two": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingMap, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("one"), - }), - "two": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_single_attribute": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("Hello, world!"), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": 
cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "nested": { - NestedType: &configschema.Object{ - Attributes: computedAttributes, - Nesting: configschema.NestingSingle, - }, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("Hello, world!"), - }), - }), - }, - "nested_list_attribute": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "nested": { - NestedType: &configschema.Object{ - Attributes: computedAttributes, - Nesting: configschema.NestingList, - }, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_set_attribute": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - 
Attributes: map[string]*configschema.Attribute{ - "nested": { - NestedType: &configschema.Object{ - Attributes: computedAttributes, - Nesting: configschema.NestingSet, - }, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("one"), - }), - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "nested_map_attribute": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - "two": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("two"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - }), - }), - schema: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "nested": { - NestedType: &configschema.Object{ - Attributes: computedAttributes, - Nesting: configschema.NestingMap, - }, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("one"), - }), - "two": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("myvalue"), - "value": cty.StringVal("two"), - }), - }), - }), - }, - "invalid_replacement_path": { - target: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.StringVal("Hello, world!"), - schema: &normalBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("kj87eb9"), - "value": cty.StringVal("Hello, world!"), - }), - expectedFailures: []string{ - "The requested replacement 
value must be an object type, but was string.", - }, - }, - "invalid_replacement_path_nested": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested_object": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested_object": cty.StringVal("Hello, world!"), - }), - schema: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "nested_object": { - NestedType: &configschema.Object{ - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - }, - Nesting: configschema.NestingSet, - }, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested_object": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - }), - }), - }), - expectedFailures: []string{ - "Terraform expected an object type at nested_object within the replacement value defined at :0,0-0, but found string.", - }, - }, - "invalid_replacement_path_nested_block": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested_object": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested_object": cty.StringVal("Hello, world!"), - }), - schema: &configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "nested_object": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - }, - }, - Nesting: configschema.NestingSet, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested_object": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - }), - }), - }), - expectedFailures: []string{ - "Terraform expected an object type at nested_object within the replacement value defined at :0,0-0, but found string.", - }, - }, - 
"invalid_replacement_type": { - target: cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("Hello, world!"), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "id": cty.ListValEmpty(cty.String), - }), - schema: &computedBlock, - expected: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("Hello, world!"), - }), - expectedFailures: []string{ - "Terraform could not replace the target type string with the replacement value defined at id within :0,0-0: string required.", - }, - }, - "invalid_replacement_type_nested": { - target: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.ObjectVal(map[string]cty.Value{ - "id": cty.EmptyObjectVal, - }), - }), - schema: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "nested": { - NestedType: &configschema.Object{ - Attributes: computedAttributes, - Nesting: configschema.NestingMap, - }, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "nested": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("one"), - }), - }), - }), - expectedFailures: []string{ - "Terraform could not replace the target type string with the replacement value defined at nested.id within :0,0-0: string required.", - }, - }, - "invalid_replacement_type_nested_block": { - target: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.NullVal(cty.String), - "value": cty.StringVal("one"), - }), - }), - }), - with: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ObjectVal(map[string]cty.Value{ - "id": cty.EmptyObjectVal, - }), - }), - schema: &configschema.Block{ - 
BlockTypes: map[string]*configschema.NestedBlock{ - "block": { - Block: computedBlock, - Nesting: configschema.NestingList, - }, - }, - }, - expected: cty.ObjectVal(map[string]cty.Value{ - "block": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("ssnk9qhr"), - "value": cty.StringVal("one"), - }), - }), - }), - expectedFailures: []string{ - "Terraform could not replace the target type string with the replacement value defined at block.id within :0,0-0: string required.", - }, - }, - } - - for name, tc := range tcs { - t.Run(name, func(t *testing.T) { - - // We'll just make sure that any random strings are deterministic. - testRand = rand.New(rand.NewSource(0)) - defer func() { - testRand = nil - }() - - actual, diags := ComputedValuesForDataSource(tc.target, ReplacementValue{ - Value: tc.with, - }, tc.schema) - - var actualFailures []string - for _, diag := range diags { - actualFailures = append(actualFailures, diag.Description().Detail) - } - if diff := cmp.Diff(tc.expectedFailures, actualFailures); len(diff) > 0 { - t.Errorf("unexpected failures\nexpected:\n%s\nactual:\n%s\ndiff:\n%s", tc.expectedFailures, actualFailures, diff) - } - - if actual.Equals(tc.expected).False() { - t.Errorf("\nexpected: (%s)\nactual: (%s)", tc.expected.GoString(), actual.GoString()) - } - }) - } -} diff --git a/internal/plans/plan.go b/internal/plans/plan.go index 1d1873344f28..18ef1a364229 100644 --- a/internal/plans/plan.go +++ b/internal/plans/plan.go @@ -12,7 +12,6 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/lang/globalref" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/states" ) @@ -113,13 +112,6 @@ type Plan struct { // representation of the plan. 
ExternalReferences []*addrs.Reference - - // Overrides contains the set of overrides that were applied while making - // this plan. We need to provide the same set of overrides when applying - // the plan so we preserve them here. As with PlannedState and - // ExternalReferences, this is only used by the testing framework and so - // isn't written into any external representation of the plan. - Overrides *mocking.Overrides - - // Timestamp is the record of truth for when the plan happened. Timestamp time.Time } diff --git a/internal/providers/mock.go b/internal/providers/mock.go deleted file mode 100644 index f955c1b51b58..000000000000 --- a/internal/providers/mock.go +++ /dev/null @@ -1,231 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package providers - -import ( - "fmt" - - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/moduletest/mocking" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -var _ Interface = (*Mock)(nil) - -// Mock is a mock provider that can be used by Terraform authors during test -// executions. -// -// The mock provider wraps an instance of an actual provider so it can return -// the correct schema and validate the configuration accurately. But, it -// intercepts calls to create resources or read data sources and instead reads -// and writes the data to/from the state directly, rather than needing to -// communicate with actual cloud providers. -// -// Callers can also specify the configs.MockData field to provide some preset -// data to return for any computed fields within the provider schema. The -// provider will make up random / junk data for any computed fields for which -// preset data is not available.
-type Mock struct { - Provider Interface - Data *configs.MockData - - schema *GetProviderSchemaResponse -} - -func (m *Mock) GetProviderSchema() GetProviderSchemaResponse { - if m.schema == nil { - // Cache the schema; it's not changing. - schema := m.Provider.GetProviderSchema() - m.schema = &schema - } - return *m.schema -} - -func (m *Mock) ValidateProviderConfig(request ValidateProviderConfigRequest) (response ValidateProviderConfigResponse) { - // The config for the mocked providers is consistent, and validated when we - // parse the HCL directly. So we'll just make no change here. - return ValidateProviderConfigResponse{ - PreparedConfig: request.Config, - } -} - -func (m *Mock) ValidateResourceConfig(request ValidateResourceConfigRequest) ValidateResourceConfigResponse { - // We'll just pass this through to the underlying provider. The mock should - // support the same resource syntax as the original provider. - return m.Provider.ValidateResourceConfig(request) -} - -func (m *Mock) ValidateDataResourceConfig(request ValidateDataResourceConfigRequest) ValidateDataResourceConfigResponse { - // We'll just pass this through to the underlying provider. The mock should - // support the same data source syntax as the original provider. - return m.Provider.ValidateDataResourceConfig(request) -} - -func (m *Mock) UpgradeResourceState(request UpgradeResourceStateRequest) UpgradeResourceStateResponse { - // It's unlikely this will ever be called on a mocked provider, given they - // can only execute from inside tests. But we don't need to do anything - // special here; let's just have the original provider handle it.
- return m.Provider.UpgradeResourceState(request) -} - -func (m *Mock) ConfigureProvider(request ConfigureProviderRequest) (response ConfigureProviderResponse) { - // Do nothing here; we don't have anything to configure within the mocked - // providers and we don't want to call the original providers from here as - // they may try to talk to their underlying cloud providers. - return response -} - -func (m *Mock) Stop() error { - // Just stop the original provider. - return m.Provider.Stop() -} - -func (m *Mock) ReadResource(request ReadResourceRequest) ReadResourceResponse { - // For a mocked provider, reading a resource is just reading it from the - // state. So we'll return what we have. - // TODO(liamcervante): Can we do more than just say the state of resources - // never changes? What if we recomputed the values, so we can have drift - // if the value in the mocked data has changed? - return ReadResourceResponse{ - NewState: request.PriorState, - } -} - -func (m *Mock) PlanResourceChange(request PlanResourceChangeRequest) PlanResourceChangeResponse { - if request.ProposedNewState.IsNull() { - // Then we are deleting this resource - we don't need to do anything. - return PlanResourceChangeResponse{ - PlannedState: request.ProposedNewState, - PlannedPrivate: []byte("destroy"), - } - } - - if request.PriorState.IsNull() { - // Then we are creating this resource - we need to populate the computed - // null fields with unknowns so Terraform will render them properly. - - var response PlanResourceChangeResponse - - schema := m.GetProviderSchema() - response.Diagnostics = response.Diagnostics.Append(schema.Diagnostics) - if schema.Diagnostics.HasErrors() { - // We couldn't retrieve the schema for some reason, so the mock - // provider can't really function.
-            return response
-        }
-
-        resource, exists := schema.ResourceTypes[request.TypeName]
-        if !exists {
-            // This means something has gone wrong much earlier, we should have
-            // failed a validation somewhere if a resource type doesn't exist.
-            panic(fmt.Errorf("failed to retrieve schema for resource %s", request.TypeName))
-        }
-
-        value, diags := mocking.PlanComputedValuesForResource(request.ProposedNewState, resource.Block)
-        response.Diagnostics = response.Diagnostics.Append(diags)
-        response.PlannedState = value
-        response.PlannedPrivate = []byte("create")
-        return response
-    }
-
-    // Otherwise, we're just doing a simple update and we don't need to populate
-    // any values for that.
-    return PlanResourceChangeResponse{
-        PlannedState: request.ProposedNewState,
-        PlannedPrivate: []byte("update"),
-    }
-}
-
-func (m *Mock) ApplyResourceChange(request ApplyResourceChangeRequest) ApplyResourceChangeResponse {
-    switch string(request.PlannedPrivate) {
-    case "create":
-        // A new resource that we've created might have computed fields we need
-        // to populate.
-
-        var response ApplyResourceChangeResponse
-
-        schema := m.GetProviderSchema()
-        response.Diagnostics = response.Diagnostics.Append(schema.Diagnostics)
-        if schema.Diagnostics.HasErrors() {
-            // We couldn't retrieve the schema for some reason, so the mock
-            // provider can't really function.
-            return response
-        }
-
-        resource, exists := schema.ResourceTypes[request.TypeName]
-        if !exists {
-            // This means something has gone wrong much earlier, we should have
-            // failed a validation somewhere if a resource type doesn't exist.
-            panic(fmt.Errorf("failed to retrieve schema for resource %s", request.TypeName))
-        }
-
-        replacement := mocking.ReplacementValue{
-            Value: cty.NilVal, // If we have no data then we use cty.NilVal.
-        }
-        if mockedResource, exists := m.Data.MockResources[request.TypeName]; exists {
-            replacement.Value = mockedResource.Defaults
-            replacement.Range = mockedResource.DefaultsRange
-        }
-
-        value, diags := mocking.ApplyComputedValuesForResource(request.PlannedState, replacement, resource.Block)
-        response.Diagnostics = response.Diagnostics.Append(diags)
-        response.NewState = value
-        return response
-
-    default:
-        // For update or destroy operations, we don't have to create any values
-        // so we'll just return the planned state directly.
-        return ApplyResourceChangeResponse{
-            NewState: request.PlannedState,
-        }
-    }
-}
-
-func (m *Mock) ImportResourceState(request ImportResourceStateRequest) (response ImportResourceStateResponse) {
-    // Given mock providers only execute from within the test framework and it
-    // doesn't make a lot of sense why someone would want to import something
-    // during a test, we just don't support this at the moment.
-    // TODO(liamcervante): Find use cases for this? The existing syntax for
-    // mocks does make this possible but let's find a reason to do it first.
-    response.Diagnostics = response.Diagnostics.Append(tfdiags.Sourceless(tfdiags.Error, "Invalid import request", "Cannot import resources from mock providers."))
-    return response
-}
-
-func (m *Mock) ReadDataSource(request ReadDataSourceRequest) ReadDataSourceResponse {
-    var response ReadDataSourceResponse
-
-    schema := m.GetProviderSchema()
-    response.Diagnostics = response.Diagnostics.Append(schema.Diagnostics)
-    if schema.Diagnostics.HasErrors() {
-        // We couldn't retrieve the schema for some reason, so the mock
-        // provider can't really function.
-        return response
-    }
-
-    datasource, exists := schema.DataSources[request.TypeName]
-    if !exists {
-        // This means something has gone wrong much earlier, we should have
-        // failed a validation somewhere if a data source type doesn't exist.
-        panic(fmt.Errorf("failed to retrieve schema for data source %s", request.TypeName))
-    }
-
-    mockedData := mocking.ReplacementValue{
-        Value: cty.NilVal, // If we have no mocked data we use cty.NilVal.
-    }
-    if mockedDataSource, exists := m.Data.MockDataSources[request.TypeName]; exists {
-        mockedData.Value = mockedDataSource.Defaults
-        mockedData.Range = mockedDataSource.DefaultsRange
-    }
-
-    value, diags := mocking.ComputedValuesForDataSource(request.Config, mockedData, datasource.Block)
-    response.Diagnostics = response.Diagnostics.Append(diags)
-    response.State = value
-    return response
-}
-
-func (m *Mock) Close() error {
-    return m.Provider.Close()
-}
diff --git a/internal/releaseauth/all_test.go b/internal/releaseauth/all_test.go
index 59baa94c642e..4daab7c4105b 100644
--- a/internal/releaseauth/all_test.go
+++ b/internal/releaseauth/all_test.go
@@ -26,11 +26,16 @@ func TestAll(t *testing.T) {
     if err != nil {
         t.Fatal(err)
     }
+    checksums, err := ParseChecksums(sums)
+    if err != nil {
+        t.Fatal(err)
+    }
     sigAuth := NewSignatureAuthentication(signature, sums)
     sigAuth.PublicKey = string(publicKey)

     all := AllAuthenticators(
+        NewMatchingChecksumsAuthentication(actualChecksum, "sample_0.1.0_darwin_amd64.zip", checksums),
         NewChecksumAuthentication(actualChecksum, "testdata/sample_release/sample_0.1.0_darwin_amd64.zip"),
         sigAuth,
     )
diff --git a/internal/releaseauth/hash_test.go b/internal/releaseauth/hash_test.go
deleted file mode 100644
index fe53baf41884..000000000000
--- a/internal/releaseauth/hash_test.go
+++ /dev/null
@@ -1,69 +0,0 @@
-// Copyright (c) HashiCorp, Inc.
-// SPDX-License-Identifier: BUSL-1.1
-
-package releaseauth
-
-import (
-    "strings"
-    "testing"
-)
-
-func Test_ParseChecksums(t *testing.T) {
-    sample := `bb611bb4c082fec9943d3c315fcd7cacd7dabce43fbf79b8e6b451bb4e54096d terraform-cloudplugin_0.1.0-prototype_darwin_amd64.zip
-295bf15c2af01d18ce7832d6d357667119e4b14eb8fd2454d506b23ed7825652 terraform-cloudplugin_0.1.0-prototype_darwin_arm64.zip
-b0744b9c8c0eb7ea61824728c302d0fd4fda4bb841fb6b3e701ef9eb10adbc39 terraform-cloudplugin_0.1.0-prototype_freebsd_386.zip
-8fc967d1402c5106fb0ca1b084b7edd2b11fd8d7c2225f5cd05584a56e0b2a16 terraform-cloudplugin_0.1.0-prototype_freebsd_amd64.zip
-2f35b2fc748b6f279b067a4eefd65264f811a2ae86a969461851dae546aa402d terraform-cloudplugin_0.1.0-prototype_linux_386.zip
-c877c8cebf76209c2c7d427d31e328212cd4716fdd8b6677939fd2a01e06a2d0 terraform-cloudplugin_0.1.0-prototype_linux_amd64.zip
-97ff8fe4e2e853c9ea54605305732e5b16437045230a2df21f410e36edcfe7bd terraform-cloudplugin_0.1.0-prototype_linux_arm.zip
-d415a1c39b9ec79bd00efe72d0bf14e557833b6c1ce9898f223a7dd22abd0241 terraform-cloudplugin_0.1.0-prototype_linux_arm64.zip
-0f33a13eca612d1b3cda959d655a1535d69bcc1195dee37407c667c12c4900b5 terraform-cloudplugin_0.1.0-prototype_solaris_amd64.zip
-a6d572e5064e1b1cf8b0b4e64bc058dc630313c95e975b44e0540f231655d31c terraform-cloudplugin_0.1.0-prototype_windows_386.zip
-2aaceed12ebdf25d21f9953a09c328bd8892f5a5bd5382bd502f054478f56998 terraform-cloudplugin_0.1.0-prototype_windows_amd64.zip
-`
-
-    sums, err := ParseChecksums([]byte(sample))
-    if err != nil {
-        t.Fatalf("Expected no error, got: %s", err)
-    }
-
-    expectedSum, err := SHA256FromHex("2f35b2fc748b6f279b067a4eefd65264f811a2ae86a969461851dae546aa402d")
-    if err != nil {
-        t.Fatalf("Expected no error, got: %s", err)
-    }
-
-    if found := sums["terraform-cloudplugin_0.1.0-prototype_linux_386.zip"]; found != expectedSum {
-        t.Errorf("Expected %q, got %q", expectedSum, found)
-    }
-}
-
-func Test_ParseChecksums_Empty(t *testing.T) {
-    sample := `
-`
-
-    sums, err := ParseChecksums([]byte(sample))
-    if err != nil {
-        t.Fatalf("Expected no error, got: %s", err)
-    }
-
-    expectedSum, err := SHA256FromHex("2f35b2fc748b6f279b067a4eefd65264f811a2ae86a969461851dae546aa402d")
-    if err != nil {
-        t.Fatalf("Expected no error, got: %s", err)
-    }
-
-    err = sums.Validate("terraform-cloudplugin_0.1.0-prototype_linux_arm.zip", expectedSum)
-    if err == nil || !strings.Contains(err.Error(), "no checksum found for filename") {
-        t.Errorf("Expected error %q, got nil", "no checksum found for filename")
-    }
-}
-
-func Test_ParseChecksums_BadFormat(t *testing.T) {
-    sample := `xxxxxxxxxxxxxxxxxxxxxx terraform-cloudplugin_0.1.0-prototype_darwin_amd64.zip
- zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz terraform-cloudplugin_0.1.0-prototype_darwin_arm64.zip
-`
-
-    _, err := ParseChecksums([]byte(sample))
-    if err == nil || !strings.Contains(err.Error(), "failed to parse checksums") {
-        t.Fatalf("Expected error %q, got: %s", "failed to parse checksums", err)
-    }
-}
diff --git a/internal/releaseauth/matching_sums.go b/internal/releaseauth/matching_sums.go
new file mode 100644
index 000000000000..6201269e3400
--- /dev/null
+++ b/internal/releaseauth/matching_sums.go
@@ -0,0 +1,55 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: BUSL-1.1
+
+package releaseauth
+
+import "fmt"
+
+// ErrChecksumMismatch is the error returned when a reported checksum does not match
+// what is stored in a SHA256SUMS file
+type ErrChecksumMismatch struct {
+    Inner error
+}
+
+func (e ErrChecksumMismatch) Error() string {
+    return fmt.Sprintf("failed to authenticate that release checksum matches checksum provided by the manifest: %v", e.Inner)
+}
+
+func (e ErrChecksumMismatch) Unwrap() error {
+    return e.Inner
+}
+
+// MatchingChecksumsAuthentication is an archive Authenticator that checks if a reported checksum
+// matches the checksum that was stored in a SHA256SUMS file
+type MatchingChecksumsAuthentication struct {
+    Authenticator
+
+    expected SHA256Hash
+    sums SHA256Checksums
+    baseName string
+}
+
+var _ Authenticator = MatchingChecksumsAuthentication{}
+
+// NewMatchingChecksumsAuthentication creates the Authenticator given an expected hash,
+// the parsed SHA256SUMS data, and a filename.
+func NewMatchingChecksumsAuthentication(expected SHA256Hash, baseName string, sums SHA256Checksums) *MatchingChecksumsAuthentication {
+    return &MatchingChecksumsAuthentication{
+        expected: expected,
+        sums: sums,
+        baseName: baseName,
+    }
+}
+
+// Authenticate ensures that the given hash matches what is found in the SHA256SUMS file
+// for the corresponding filename
+func (a MatchingChecksumsAuthentication) Authenticate() error {
+    err := a.sums.Validate(a.baseName, a.expected)
+    if err != nil {
+        return ErrChecksumMismatch{
+            Inner: err,
+        }
+    }
+
+    return nil
+}
diff --git a/internal/releaseauth/signature.go b/internal/releaseauth/signature.go
index 97ffc16d5467..d70071a4066a 100644
--- a/internal/releaseauth/signature.go
+++ b/internal/releaseauth/signature.go
@@ -36,7 +36,7 @@ func NewSignatureAuthentication(signature []byte, signed []byte) *SignatureAuthe
     return &SignatureAuthentication{
         signature: signature,
         signed: signed,
-        PublicKey: HashiCorpPublicKey,
+        PublicKey: HashicorpPublicKey,
     }
 }
@@ -59,8 +59,7 @@
 // HashicorpPublicKey is the HashiCorp public key, also available at
 // https://www.hashicorp.com/security
-const HashiCorpPublicKeyID = "72D7468F"
-const HashiCorpPublicKey = `-----BEGIN PGP PUBLIC KEY BLOCK-----
+const HashicorpPublicKey = `-----BEGIN PGP PUBLIC KEY BLOCK-----
 mQINBGB9+xkBEACabYZOWKmgZsHTdRDiyPJxhbuUiKX65GUWkyRMJKi/1dviVxOX
 PG6hBPtF48IFnVgxKpIb7G6NjBousAV+CuLlv5yqFKpOZEGC6sBV+Gx8Vu1CICpl
diff --git a/internal/states/statefile/version4.go b/internal/states/statefile/version4.go
index 52939e8cb896..f4f9b528d17f 100644
--- a/internal/states/statefile/version4.go
+++ b/internal/states/statefile/version4.go
@@ -589,6 +589,19 @@ func encodeCheckResultsV4(in *states.CheckResults) []checkResultsV4 {
     ret := make([]checkResultsV4, 0, in.ConfigResults.Len())
     for _, configElem := range in.ConfigResults.Elems {
+
+        if configElem.Key.CheckableKind() == addrs.CheckableInputVariable {
+            // For the remainder of the v1.6 series we should not output
+            // checkable input variables into the state file.
+            //
+            // This is to work around the issue in earlier releases that was
+            // fixed by https://github.com/hashicorp/terraform/pull/33813.
+            //
+            // The aim here is to give people more time to upgrade before we
+            // make the latest version incompatible with earlier versions.
+            continue
+        }
+
         configResultsOut := checkResultsV4{
             ObjectKind: encodeCheckableObjectKindV4(configElem.Key.CheckableKind()),
             ConfigAddr: configElem.Key.String(),
diff --git a/internal/terraform/context_apply.go b/internal/terraform/context_apply.go
index 5baa19a7c725..68c3c5460ae4 100644
--- a/internal/terraform/context_apply.go
+++ b/internal/terraform/context_apply.go
@@ -62,7 +62,6 @@ func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State
         Config: config,
         InputState: workingState,
         Changes: plan.Changes,
-        Overrides: plan.Overrides,

         // We need to propagate the check results from the plan phase,
         // because that will tell us which checkable objects we're expecting
diff --git a/internal/terraform/context_apply2_test.go b/internal/terraform/context_apply2_test.go
index fcd0dab5f037..11db793140e7 100644
--- a/internal/terraform/context_apply2_test.go
+++ b/internal/terraform/context_apply2_test.go
@@ -18,7 +18,6 @@ import (
     "github.com/hashicorp/terraform/internal/addrs"
     "github.com/hashicorp/terraform/internal/checks"
-    "github.com/hashicorp/terraform/internal/configs"
     "github.com/hashicorp/terraform/internal/configs/configschema"
     "github.com/hashicorp/terraform/internal/lang/marks"
     "github.com/hashicorp/terraform/internal/plans"
@@ -2325,134 +2324,3 @@ locals {
         t.Errorf("expected local value to be \"foo\" but was \"%s\"", module.LocalValues["local_value"].AsString())
     }
 }
-
-func TestContext2Apply_mockProvider(t *testing.T) {
-    m := testModuleInline(t, map[string]string{
-        "main.tf": `
-provider "test" {}
-
-data "test_object" "foo" {}
-
-resource "test_object" "foo" {
-  value = data.test_object.foo.output
-}
-`,
-    })
-
-    // Manually mark the provider config as being mocked.
-    m.Module.ProviderConfigs["test"].Mock = true
-    m.Module.ProviderConfigs["test"].MockData = &configs.MockData{
-        MockDataSources: map[string]*configs.MockResource{
-            "test_object": {
-                Mode: addrs.DataResourceMode,
-                Type: "test_object",
-                Defaults: cty.ObjectVal(map[string]cty.Value{
-                    "output": cty.StringVal("expected data output"),
-                }),
-            },
-        },
-        MockResources: map[string]*configs.MockResource{
-            "test_object": {
-                Mode: addrs.ManagedResourceMode,
-                Type: "test_object",
-                Defaults: cty.ObjectVal(map[string]cty.Value{
-                    "output": cty.StringVal("expected resource output"),
-                }),
-            },
-        },
-    }
-
-    testProvider := &MockProvider{
-        GetProviderSchemaResponse: &providers.GetProviderSchemaResponse{
-            ResourceTypes: map[string]providers.Schema{
-                "test_object": {
-                    Block: &configschema.Block{
-                        Attributes: map[string]*configschema.Attribute{
-                            "value": {
-                                Type: cty.String,
-                                Required: true,
-                            },
-                            "output": {
-                                Type: cty.String,
-                                Computed: true,
-                            },
-                        },
-                    },
-                },
-            },
-            DataSources: map[string]providers.Schema{
-                "test_object": {
-                    Block: &configschema.Block{
-                        Attributes: map[string]*configschema.Attribute{
-                            "output": {
-                                Type: cty.String,
-                                Computed: true,
-                            },
-                        },
-                    },
-                },
-            },
-        },
-    }
-
-    reachedReadDataSourceFn := false
-    reachedPlanResourceChangeFn := false
-    reachedApplyResourceChangeFn := false
-    testProvider.ReadDataSourceFn = func(request providers.ReadDataSourceRequest) (resp providers.ReadDataSourceResponse) {
-        reachedReadDataSourceFn = true
-        cfg := request.Config.AsValueMap()
-        cfg["output"] = cty.StringVal("unexpected data output")
-        resp.State = cty.ObjectVal(cfg)
-        return resp
-    }
-    testProvider.PlanResourceChangeFn = func(request providers.PlanResourceChangeRequest) (resp providers.PlanResourceChangeResponse) {
-        reachedPlanResourceChangeFn = true
-        cfg := request.Config.AsValueMap()
-        cfg["output"] = cty.UnknownVal(cty.String)
-        resp.PlannedState = cty.ObjectVal(cfg)
-        return resp
-    }
-    testProvider.ApplyResourceChangeFn = func(request providers.ApplyResourceChangeRequest) (resp providers.ApplyResourceChangeResponse) {
-        reachedApplyResourceChangeFn = true
-        cfg := request.Config.AsValueMap()
-        cfg["output"] = cty.StringVal("unexpected resource output")
-        resp.NewState = cty.ObjectVal(cfg)
-        return resp
-    }
-
-    ctx := testContext2(t, &ContextOpts{
-        Providers: map[addrs.Provider]providers.Factory{
-            addrs.NewDefaultProvider("test"): testProviderFuncFixed(testProvider),
-        },
-    })
-
-    plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{
-        Mode: plans.NormalMode,
-    })
-    if diags.HasErrors() {
-        t.Fatalf("expected no errors, but got %s", diags)
-    }
-
-    state, diags := ctx.Apply(plan, m)
-    if diags.HasErrors() {
-        t.Fatalf("expected no errors, but got %s", diags)
-    }
-
-    // Check we never made it to the actual provider.
-    if reachedReadDataSourceFn {
-        t.Errorf("read the data source in the provider when it should have been mocked")
-    }
-    if reachedPlanResourceChangeFn {
-        t.Errorf("planned the resource in the provider when it should have been mocked")
-    }
-    if reachedApplyResourceChangeFn {
-        t.Errorf("applied the resource in the provider when it should have been mocked")
-    }
-
-    // Check we got the right data back from our mocked provider.
-    instance := state.ResourceInstance(mustResourceInstanceAddr("test_object.foo"))
-    expected := "{\"output\":\"expected resource output\",\"value\":\"expected data output\"}"
-    if diff := cmp.Diff(string(instance.Current.AttrsJSON), expected); len(diff) > 0 {
-        t.Errorf("expected:\n%s\nactual:\n%s\ndiff:\n%s", expected, string(instance.Current.AttrsJSON), diff)
-    }
-}
diff --git a/internal/terraform/context_apply_overrides_test.go b/internal/terraform/context_apply_overrides_test.go
deleted file mode 100644
index 217e1f7b7169..000000000000
--- a/internal/terraform/context_apply_overrides_test.go
+++ /dev/null
@@ -1,614 +0,0 @@
-// Copyright (c) HashiCorp, Inc.
-// SPDX-License-Identifier: BUSL-1.1
-
-package terraform
-
-import (
-    "testing"
-
-    "github.com/zclconf/go-cty/cty"
-
-    "github.com/hashicorp/terraform/internal/addrs"
-    "github.com/hashicorp/terraform/internal/configs"
-    "github.com/hashicorp/terraform/internal/configs/configschema"
-    "github.com/hashicorp/terraform/internal/moduletest/mocking"
-    "github.com/hashicorp/terraform/internal/plans"
-    "github.com/hashicorp/terraform/internal/providers"
-    "github.com/hashicorp/terraform/internal/states"
-)
-
-// This file contains 'integration' tests for the Terraform test overrides
-// functionality.
-//
-// These tests could live in context_apply_test or context_apply2_test but given
-// the size of those files, it makes sense to keep these tests grouped together.
-
-func TestContextOverrides(t *testing.T) {
-
-    // The approach to the testing here, is to create some configuration that
-    // would panic if executed normally because of the underlying provider.
-    //
-    // We then write overrides that make sure the underlying provider is never
-    // called.
-    //
-    // We then run a plan, apply, refresh, destroy sequence that tests all the
-    // potential function calls to the underlying provider to make sure we
-    // have covered everything.
-    //
-    // Finally, we validate some expected values after the apply stage to make
-    // sure the overrides are returning the values we want them to.
-
-    tcs := map[string]struct {
-        configs map[string]string
-        overrides *mocking.Overrides
-        outputs cty.Value
-    }{
-        "resource": {
-            configs: map[string]string{
-                "main.tf": `
-resource "test_instance" "instance" {
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}`,
-            },
-            overrides: mocking.OverridesForTesting(nil, func(overrides addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides.Put(mustResourceInstanceAddr("test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-            }),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "resource_from_provider": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-resource "test_instance" "instance" {
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}`,
-            },
-            overrides: mocking.OverridesForTesting(func(overrides map[string]addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides["test"] = addrs.MakeMap[addrs.Targetable, *configs.Override]()
-                overrides["test"].Put(mustResourceInstanceAddr("test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-            }, nil),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "selectively_applies_provider": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-provider "test" {
-  alias = "secondary"
-}
-
-resource "test_instance" "primary" {
-  value = "primary"
-}
-
-resource "test_instance" "secondary" {
-  provider = test.secondary
-  value = "secondary"
-}
-
-output "primary_value" {
-  value = test_instance.primary.value
-}
-
-output "primary_id" {
-  value = test_instance.primary.id
-}
-
-output "secondary_value" {
-  value = test_instance.secondary.value
-}
-
-output "secondary_id" {
-  value = test_instance.secondary.id
-}`,
-            },
-            overrides: mocking.OverridesForTesting(func(overrides map[string]addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides["test.secondary"] = addrs.MakeMap[addrs.Targetable, *configs.Override]()
-                // Test should not apply this override, as this provider is
-                // not being used for this resource.
-                overrides["test.secondary"].Put(mustResourceInstanceAddr("test_instance.primary"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("primary_id"),
-                    }),
-                })
-                overrides["test.secondary"].Put(mustResourceInstanceAddr("test_instance.secondary"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("secondary_id"),
-                    }),
-                })
-            }, func(overrides addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides.Put(mustResourceInstanceAddr("test_instance.primary"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-            }),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "primary_id": cty.StringVal("h3ll0"),
-                "primary_value": cty.StringVal("primary"),
-                "secondary_id": cty.StringVal("secondary_id"),
-                "secondary_value": cty.StringVal("secondary"),
-            }),
-        },
-        "propagates_provider_to_modules_explicit": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-module "mod" {
-  source = "./mod"
-
-  providers = {
-    test = test
-  }
-}
-
-output "value" {
-  value = module.mod.value
-}
-
-output "id" {
-  value = module.mod.id
-}`,
-                "mod/main.tf": `
-provider "test" {}
-
-resource "test_instance" "instance" {
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}
-`,
-            },
-            overrides: mocking.OverridesForTesting(func(overrides map[string]addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides["test"] = addrs.MakeMap[addrs.Targetable, *configs.Override]()
-                overrides["test"].Put(mustResourceInstanceAddr("module.mod.test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-            }, nil),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "propagates_provider_to_modules_implicit": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-module "mod" {
-  source = "./mod"
-}
-
-output "value" {
-  value = module.mod.value
-}
-
-output "id" {
-  value = module.mod.id
-}`,
-                "mod/main.tf": `
-resource "test_instance" "instance" {
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}
-
-`,
-            },
-            overrides: mocking.OverridesForTesting(func(overrides map[string]addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides["test"] = addrs.MakeMap[addrs.Targetable, *configs.Override]()
-                overrides["test"].Put(mustResourceInstanceAddr("module.mod.test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-            }, nil),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "data_source": {
-            configs: map[string]string{
-                "main.tf": `
-data "test_instance" "instance" {
-  id = "data-source"
-}
-
-resource "test_instance" "instance" {
-  value = data.test_instance.instance.value
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}`,
-            },
-            overrides: mocking.OverridesForTesting(nil, func(overrides addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides.Put(mustResourceInstanceAddr("test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-                overrides.Put(mustResourceInstanceAddr("data.test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "value": cty.StringVal("Hello, world!"),
-                    }),
-                })
-            }),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "module": {
-            configs: map[string]string{
-                "main.tf": `
-module "mod" {
-  source = "./mod"
-}
-
-output "value" {
-  value = module.mod.value
-}
-
-output "id" {
-  value = module.mod.id
-}`,
-                "mod/main.tf": `
-resource "test_instance" "instance" {
-  value = "random"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}
-
-check "value" {
-  assert {
-    condition = test_instance.instance.value == "definitely wrong"
-    error_message = "bad value"
-  }
-}
-`,
-            },
-            overrides: mocking.OverridesForTesting(nil, func(overrides addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides.Put(mustModuleInstance("module.mod"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                        "value": cty.StringVal("Hello, world!"),
-                    }),
-                })
-            }),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "provider_type_override": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-module "mod" {
-  source = "./mod"
-}
-
-output "value" {
-  value = module.mod.value
-}
-
-output "id" {
-  value = module.mod.id
-}`,
-                "mod/main.tf": `
-terraform {
-  required_providers {
-    replaced = {
-      source = "hashicorp/test"
-    }
-  }
-}
-
-resource "test_instance" "instance" {
-  provider = replaced
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}
-
-`,
-            },
-            overrides: mocking.OverridesForTesting(func(overrides map[string]addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides["test"] = addrs.MakeMap[addrs.Targetable, *configs.Override]()
-                overrides["test"].Put(mustResourceInstanceAddr("module.mod.test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("h3ll0"),
-                    }),
-                })
-            }, nil),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.StringVal("h3ll0"),
-                "value": cty.StringVal("Hello, world!"),
-            }),
-        },
-        "resource_instance_overrides": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-resource "test_instance" "instance" {
-  count = 3
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.*.value
-}
-
-output "id" {
-  value = test_instance.instance.*.id
-}`,
-            },
-            overrides: mocking.OverridesForTesting(nil, func(overrides addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides.Put(mustAbsResourceAddr("test_instance.instance"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("generic"),
-                    }),
-                })
-                overrides.Put(mustResourceInstanceAddr("test_instance.instance[1]"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("specific"),
-                    }),
-                })
-            }),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.TupleVal([]cty.Value{
-                    cty.StringVal("generic"),
-                    cty.StringVal("specific"),
-                    cty.StringVal("generic"),
-                }),
-                "value": cty.TupleVal([]cty.Value{
-                    cty.StringVal("Hello, world!"),
-                    cty.StringVal("Hello, world!"),
-                    cty.StringVal("Hello, world!"),
-                }),
-            }),
-        },
-        "module_instance_overrides": {
-            configs: map[string]string{
-                "main.tf": `
-provider "test" {}
-
-module "mod" {
-  count = 3
-  source = "./mod"
-}
-
-output "value" {
-  value = module.mod.*.value
-}
-
-output "id" {
-  value = module.mod.*.id
-}`,
-                "mod/main.tf": `
-terraform {
-  required_providers {
-    replaced = {
-      source = "hashicorp/test"
-    }
-  }
-}
-
-resource "test_instance" "instance" {
-  provider = replaced
-  value = "Hello, world!"
-}
-
-output "value" {
-  value = test_instance.instance.value
-}
-
-output "id" {
-  value = test_instance.instance.id
-}
-
-`,
-            },
-            overrides: mocking.OverridesForTesting(nil, func(overrides addrs.Map[addrs.Targetable, *configs.Override]) {
-                overrides.Put(mustModuleInstance("module.mod"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("generic"),
-                        "value": cty.StringVal("Hello, world!"),
-                    }),
-                })
-                overrides.Put(mustModuleInstance("module.mod[1]"), &configs.Override{
-                    Values: cty.ObjectVal(map[string]cty.Value{
-                        "id": cty.StringVal("specific"),
-                        "value": cty.StringVal("Hello, world!"),
-                    }),
-                })
-            }),
-            outputs: cty.ObjectVal(map[string]cty.Value{
-                "id": cty.TupleVal([]cty.Value{
-                    cty.StringVal("generic"),
-                    cty.StringVal("specific"),
-                    cty.StringVal("generic"),
-                }),
-                "value": cty.TupleVal([]cty.Value{
-                    cty.StringVal("Hello, world!"),
-                    cty.StringVal("Hello, world!"),
-                    cty.StringVal("Hello, world!"),
-                }),
-            }),
-        },
-    }
-    for name, tc := range tcs {
-        t.Run(name, func(t *testing.T) {
-            cfg := testModuleInline(t, tc.configs)
-            ctx := testContext2(t, &ContextOpts{
-                Providers: map[addrs.Provider]providers.Factory{
-                    addrs.NewDefaultProvider("test"): testProviderFuncFixed(underlyingOverridesProvider),
-                },
-            })
-
-            plan, diags := ctx.Plan(cfg, states.NewState(), &PlanOpts{
-                Mode: plans.NormalMode,
-                Overrides: tc.overrides,
-            })
-            if diags.HasErrors() {
-                t.Fatal(diags.Err())
-            }
-
-            state, diags := ctx.Apply(plan, cfg)
-            if diags.HasErrors() {
-                t.Fatal(diags.Err())
-            }
-
-            outputs := make(map[string]cty.Value, len(cfg.Module.Outputs))
-            for _, output := range cfg.Module.Outputs {
-                outputs[output.Name] = state.OutputValue(output.Addr().Absolute(addrs.RootModuleInstance)).Value
-            }
-            actual := cty.ObjectVal(outputs)
-
-            if !actual.RawEquals(tc.outputs) {
-                t.Fatalf("expected:\n%s\nactual:\n%s", tc.outputs.GoString(), actual.GoString())
-            }
-
-            _, diags = ctx.Plan(cfg, state, &PlanOpts{
-                Mode: plans.RefreshOnlyMode,
-                Overrides: tc.overrides,
-            })
-            if diags.HasErrors() {
-                t.Fatal(diags.Err())
-            }
-
-            destroyPlan, diags := ctx.Plan(cfg, state, &PlanOpts{
-                Mode: plans.DestroyMode,
-                Overrides: tc.overrides,
-            })
-            if diags.HasErrors() {
-                t.Fatal(diags.Err())
-            }
-
-            _, diags = ctx.Apply(destroyPlan, cfg)
-            if diags.HasErrors() {
-                t.Fatal(diags.Err())
-            }
-        })
-    }
-
-}
-
-// underlyingOverridesProvider returns a provider that always panics for
-// important calls. This is to validate the behaviour of the overrides
-// functionality, in that they should stop the provider from being executed.
-var underlyingOverridesProvider = &MockProvider{
-    GetProviderSchemaResponse: &providers.GetProviderSchemaResponse{
-        ResourceTypes: map[string]providers.Schema{
-            "test_instance": {
-                Block: &configschema.Block{
-                    Attributes: map[string]*configschema.Attribute{
-                        "id": {
-                            Type: cty.String,
-                            Computed: true,
-                        },
-                        "value": {
-                            Type: cty.String,
-                            Required: true,
-                        },
-                    },
-                },
-            },
-        },
-        DataSources: map[string]providers.Schema{
-            "test_instance": {
-                Block: &configschema.Block{
-                    Attributes: map[string]*configschema.Attribute{
-                        "id": {
-                            Type: cty.String,
-                            Required: true,
-                        },
-                        "value": {
-                            Type: cty.String,
-                            Computed: true,
-                        },
-                    },
-                },
-            },
-        },
-    },
-    ReadResourceFn: func(request providers.ReadResourceRequest) providers.ReadResourceResponse {
-        panic("ReadResourceFn called, should have been overridden.")
-    },
-    PlanResourceChangeFn: func(request providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse {
-        panic("PlanResourceChangeFn called, should have been overridden.")
-    },
-    ApplyResourceChangeFn: func(request providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse {
-        panic("ApplyResourceChangeFn called, should have been overridden.")
-    },
-    ReadDataSourceFn: func(request providers.ReadDataSourceRequest) providers.ReadDataSourceResponse {
-        panic("ReadDataSourceFn called, should have been overridden.")
-    },
-}
diff --git a/internal/terraform/context_import.go b/internal/terraform/context_import.go
index 0b8dbf839f30..09741e5c04e3 100644
--- a/internal/terraform/context_import.go
+++ b/internal/terraform/context_import.go
@@ -6,6 +6,7 @@ package terraform
 import (
     "log"

+    "github.com/hashicorp/hcl/v2"
     "github.com/hashicorp/terraform/internal/addrs"
     "github.com/hashicorp/terraform/internal/configs"
     "github.com/hashicorp/terraform/internal/states"
@@ -29,14 +30,12 @@ type ImportTarget struct {
     // if the import did not originate in config.
     Config *configs.Import

-    // LegacyAddr is the import address set from the command line arguments
-    // when using the import command.
-    LegacyAddr addrs.AbsResourceInstance
+    // Addr is the address for the resource instance that the new object should
+    // be imported into.
+    Addr addrs.AbsResourceInstance

-    // IDString stores the evaluated ID from the Config for the import process.
-    // This is also used by the legacy import command to directly set the ID
-    // given from the CLI.
-    IDString string
+    // ID is the ID of the resource to import. This is resource-specific.
+ ID hcl.Expression } // Import takes already-created external resources and brings them diff --git a/internal/terraform/context_import_test.go b/internal/terraform/context_import_test.go index d208c84d3bca..6625c1bca709 100644 --- a/internal/terraform/context_import_test.go +++ b/internal/terraform/context_import_test.go @@ -11,7 +11,10 @@ import ( "github.com/google/go-cmp/cmp" "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/hcltest" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" @@ -37,13 +40,14 @@ func TestContextImport_basic(t *testing.T) { }, } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -88,13 +92,14 @@ resource "aws_instance" "foo" { }, } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.IntKey(0), ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -149,13 +154,14 @@ func TestContextImport_collision(t *testing.T) { }, } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, state, &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: 
addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -193,13 +199,14 @@ func TestContextImport_missingType(t *testing.T) { }, }) + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -244,13 +251,14 @@ func TestContextImport_moduleProvider(t *testing.T) { }, }) + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -299,13 +307,14 @@ func TestContextImport_providerModule(t *testing.T) { return } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) _, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( + Addr: addrs.RootModuleInstance.Child("child", addrs.NoKey).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -355,13 +364,14 @@ func TestContextImport_providerConfig(t *testing.T) { }, } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: 
[]*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, SetVariables: InputValues{ @@ -415,13 +425,14 @@ func TestContextImport_providerConfigResources(t *testing.T) { }, } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) _, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -486,13 +497,14 @@ data "aws_data_source" "bar" { }), } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -537,13 +549,14 @@ func TestContextImport_refreshNil(t *testing.T) { } } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -578,13 +591,14 @@ func TestContextImport_module(t *testing.T) { }, } + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: 
addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( + Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -619,13 +633,14 @@ func TestContextImport_moduleDepth2(t *testing.T) { }, } + bazExpr := hcl.StaticExpr(cty.StringVal("baz"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).Child("nested", addrs.NoKey).ResourceInstance( + Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).Child("nested", addrs.NoKey).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "baz", + ID: bazExpr, }, }, }) @@ -660,13 +675,14 @@ func TestContextImport_moduleDiff(t *testing.T) { }, } + bazExpr := hcl.StaticExpr(cty.StringVal("baz"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( + Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "baz", + ID: bazExpr, }, }, }) @@ -728,13 +744,14 @@ func TestContextImport_multiState(t *testing.T) { }, }) + barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -802,13 +819,14 @@ func TestContextImport_multiStateSame(t *testing.T) { }, }) 
+ barExpr := hcl.StaticExpr(cty.StringVal("bar"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), - IDString: "bar", + ID: barExpr, }, }, }) @@ -896,13 +914,14 @@ resource "test_resource" "unused" { }, }) + testExpr := hcl.StaticExpr(cty.StringVal("test"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "test_resource", "test", addrs.NoKey, ), - IDString: "test", + ID: testExpr, }, }, }) @@ -966,13 +985,14 @@ resource "test_resource" "test" { }, }) + testExpr := hcl.StaticExpr(cty.StringVal("test"), configs.SynthBody("import", nil).MissingItemRange()) state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "test_resource", "test", addrs.NoKey, ), - IDString: "test", + ID: testExpr, }, }, }) @@ -993,6 +1013,7 @@ resource "test_resource" "test" { func TestContextImport_33572(t *testing.T) { p := testProvider("aws") m := testModule(t, "issue-33572") + bar_expr := hcltest.MockExprLiteral(cty.StringVal("bar")) ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ @@ -1014,10 +1035,10 @@ func TestContextImport_33572(t *testing.T) { state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ { - LegacyAddr: addrs.RootModuleInstance.ResourceInstance( + Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", 
addrs.NoKey, ), - IDString: "bar", + ID: bar_expr, }, }, }) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 7d50aa5bd195..32cf42438dce 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -17,7 +17,6 @@ import ( "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/lang/globalref" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" @@ -80,10 +79,6 @@ type PlanOpts struct { // the actual graph. ExternalReferences []*addrs.Reference - // Overrides provides a set of override objects that should be applied - // during this plan. - Overrides *mocking.Overrides - // ImportTargets is a list of target resources to import. These resources // will be added to the plan graph. ImportTargets []*ImportTarget @@ -529,14 +524,45 @@ func (c *Context) postPlanValidateMoves(config *configs.Config, stmts []refactor return refactoring.ValidateMoves(stmts, config, allInsts) } +// All import target addresses with a key must already exist in config. +// When we are able to generate config for expanded resources, this rule can be +// relaxed. +func (c *Context) postPlanValidateImports(config *configs.Config, importTargets []*ImportTarget, allInst instances.Set) tfdiags.Diagnostics { + var diags tfdiags.Diagnostics + for _, it := range importTargets { + // We only care about import target addresses that have a key. + // If the address does not have a key, we don't need it to be in config + // because we are able to generate config.
+ if it.Addr.Resource.Key == nil { + continue + } + + if !allInst.HasResourceInstance(it.Addr) { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Cannot import to non-existent resource address", + fmt.Sprintf( + "Importing to resource address %s is not possible, because that address does not exist in configuration. Please ensure that the resource key is correct, or remove this import block.", + it.Addr, + ), + )) + } + } + return diags +} + // findImportTargets builds a list of import targets by taking the import blocks // in the config and filtering out any that target a resource already in state. func (c *Context) findImportTargets(config *configs.Config, priorState *states.State) []*ImportTarget { var importTargets []*ImportTarget for _, ic := range config.Module.Import { - importTargets = append(importTargets, &ImportTarget{ - Config: ic, - }) + if priorState.ResourceInstance(ic.To) == nil { + importTargets = append(importTargets, &ImportTarget{ + Addr: ic.To, + ID: ic.ID, + Config: ic, + }) + } } return importTargets } @@ -575,13 +601,17 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, o Changes: changes, MoveResults: moveResults, PlanTimeTimestamp: timestamp, - Overrides: opts.Overrides, }) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) allInsts := walker.InstanceExpander.AllInstances() + importValidateDiags := c.postPlanValidateImports(config, opts.ImportTargets, allInsts) + if importValidateDiags.HasErrors() { + return nil, importValidateDiags + } + moveValidateDiags := c.postPlanValidateMoves(config, moveStmts, allInsts) if moveValidateDiags.HasErrors() { // If any of the move statements are invalid then those errors take @@ -626,7 +656,6 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, o PriorState: priorState, PlannedState: walker.State.Close(), ExternalReferences: opts.ExternalReferences, - Overrides: opts.Overrides, Checks: 
states.NewCheckResults(walker.Checks), Timestamp: timestamp, diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 525c9b2b39d5..7706b7604f73 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -4202,6 +4202,982 @@ resource "test_object" "a" { } } +func TestContext2Plan_importResourceBasic(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "a" { + test_string = "foo" +} + +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + hook := new(MockHook) + ctx := testContext2(t, &ContextOpts{ + Hooks: []Hook{hook}, + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addr.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + if got, want := instPlan.Addr, addr; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, 
want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + if instPlan.Importing.ID != "123" { + t.Errorf("expected import change from \"123\", got non-import change") + } + + if !hook.PrePlanImportCalled { + t.Fatalf("PostPlanImport hook not called") + } + if addr, wantAddr := hook.PrePlanImportAddr, instPlan.Addr; !addr.Equal(wantAddr) { + t.Errorf("expected addr to be %s, but was %s", wantAddr, addr) + } + + if !hook.PostPlanImportCalled { + t.Fatalf("PostPlanImport hook not called") + } + if addr, wantAddr := hook.PostPlanImportAddr, instPlan.Addr; !addr.Equal(wantAddr) { + t.Errorf("expected addr to be %s, but was %s", wantAddr, addr) + } + }) +} + +func TestContext2Plan_importResourceAlreadyInState(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "a" { + test_string = "foo" +} + +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.a").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"test_string":"foo"}`), + }, + 
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), + ) + + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addr.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + if got, want := instPlan.Addr, addr; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + if instPlan.Importing != nil { + t.Errorf("expected non-import change, got import change %+v", instPlan.Importing) + } + }) +} + +func TestContext2Plan_importResourceUpdate(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "a" { + test_string = "bar" +} + +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, 
states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addr.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + if got, want := instPlan.Addr, addr; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.Update; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + if instPlan.Importing.ID != "123" { + t.Errorf("expected import change from \"123\", got non-import change") + } + }) +} + +func TestContext2Plan_importResourceReplace(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "a" { + test_string = "bar" +} + +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + ForceReplace: 
[]addrs.AbsResourceInstance{ + addr, + }, + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addr.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + if got, want := instPlan.Addr, addr; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.DeleteThenCreate; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if instPlan.Importing.ID != "123" { + t.Errorf("expected import change from \"123\", got non-import change") + } + }) +} + +func TestContext2Plan_importRefreshOnce(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "a" { + test_string = "bar" +} + +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + readCalled := 0 + p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse { + readCalled++ + state, _ := simpleTestSchema().CoerceValue(cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + })) + + return providers.ReadResourceResponse{ + NewState: state, + } + } + + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + ForceReplace: 
[]addrs.AbsResourceInstance{ + addr, + }, + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + if readCalled > 1 { + t.Error("ReadResource called multiple times for import") + } +} + +func TestContext2Plan_importTargetWithKeyDoesNotExist(t *testing.T) { + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "a" { + count = 1 + test_string = "bar" +} + +import { + to = test_object.a[42] + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if !diags.HasErrors() { + t.Fatalf("expected error but got none") + } +} + +func TestContext2Plan_importIdVariable(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "import-id-variable") + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + }, + }) + + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "aws_instance", + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("foo"), + }), + }, + }, + } + + _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + SetVariables: InputValues{ + "the_id": &InputValue{ + // let var take its default value + Value: cty.NilVal, + }, + }, + }) + if diags.HasErrors() { + 
t.Fatalf("unexpected errors: %s", diags.Err()) + } +} + +func TestContext2Plan_importIdFunc(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "import-id-func") + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + }, + }) + + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "aws_instance", + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("foo"), + }), + }, + }, + } + + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { + t.Fatalf("unexpected errors: %s", diags.Err()) + } +} + +func TestContext2Plan_importIdDataSource(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "import-id-data-source") + + p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ + ResourceTypes: map[string]*configschema.Block{ + "aws_subnet": { + Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + DataSources: map[string]*configschema.Block{ + "aws_subnet": { + Attributes: map[string]*configschema.Attribute{ + "vpc_id": { + Type: cty.String, + Required: true, + }, + "cidr_block": { + Type: cty.String, + Computed: true, + }, + "id": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + }) + p.ReadDataSourceResponse = &providers.ReadDataSourceResponse{ + State: cty.ObjectVal(map[string]cty.Value{ + "vpc_id": cty.StringVal("abc"), + "cidr_block": cty.StringVal("10.0.1.0/24"), + "id": cty.StringVal("123"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "aws_subnet", + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("foo"), + }), + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Providers: 
map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + }, + }) + + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { + t.Fatalf("unexpected errors: %s", diags.Err()) + } +} + +func TestContext2Plan_importIdModule(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "import-id-module") + + p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ + ResourceTypes: map[string]*configschema.Block{ + "aws_lb": { + Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + }) + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "aws_lb", + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("foo"), + }), + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + }, + }) + + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { + t.Fatalf("unexpected errors: %s", diags.Err()) + } +} + +func TestContext2Plan_importIdInvalidNull(t *testing.T) { + p := testProvider("test") + m := testModule(t, "import-id-invalid-null") + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + SetVariables: InputValues{ + "the_id": &InputValue{ + Value: cty.NullVal(cty.String), + }, + }, + }) + if !diags.HasErrors() { + t.Fatal("succeeded; want errors") + } + if got, want := diags.Err().Error(), "The import ID cannot be null"; !strings.Contains(got, want) { + t.Fatalf("wrong error:\ngot: %s\nwant: message containing %q", got, want) + } +} + +func TestContext2Plan_importIdInvalidUnknown(t *testing.T) { + p := 
testProvider("test") + m := testModule(t, "import-id-invalid-unknown") + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ + ResourceTypes: map[string]*configschema.Block{ + "test_resource": { + Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + }) + p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse { + return providers.PlanResourceChangeResponse{ + PlannedState: cty.UnknownVal(cty.Object(map[string]cty.Type{ + "id": cty.String, + })), + } + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_resource", + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("foo"), + }), + }, + }, + } + + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if !diags.HasErrors() { + t.Fatal("succeeded; want errors") + } + if got, want := diags.Err().Error(), `The import block "id" argument depends on resource attributes that cannot be determined until apply`; !strings.Contains(got, want) { + t.Fatalf("wrong error:\ngot: %s\nwant: message containing %q", got, want) + } +} + +func TestContext2Plan_importIntoModuleWithGeneratedConfig(t *testing.T) { + m := testModuleInline(t, map[string]string{ + "main.tf": ` +import { + to = test_object.a + id = "123" +} + +import { + to = module.mod.test_object.a + id = "456" +} + +module "mod" { + source = "./mod" +} +`, + "./mod/main.tf": ` +resource "test_object" "a" { + test_string = "bar" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + 
p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + one := mustResourceInstanceAddr("test_object.a") + two := mustResourceInstanceAddr("module.mod.test_object.a") + + onePlan := plan.Changes.ResourceInstance(one) + twoPlan := plan.Changes.ResourceInstance(two) + + // This test is just to make sure things work e2e with modules and generated + // config, so we're not too careful about the actual responses - we're just + // happy nothing panicked. See the other import tests for actual validation + // of responses and the like.
+ if twoPlan.Action != plans.Update { + t.Errorf("expected nested item to be updated but was %s", twoPlan.Action) + } + + if len(onePlan.GeneratedConfig) == 0 { + t.Errorf("expected root item to generate config but it didn't") + } +} + +func TestContext2Plan_importResourceConfigGen(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. 
+ }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addr.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + if got, want := instPlan.Addr, addr; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + if instPlan.Importing.ID != "123" { + t.Errorf("expected import change from \"123\", got non-import change") + } + + want := `resource "test_object" "a" { + test_bool = null + test_list = null + test_map = null + test_number = null + test_string = "foo" +}` + got := instPlan.GeneratedConfig + if diff := cmp.Diff(want, got); len(diff) > 0 { + t.Errorf("got:\n%s\nwant:\n%s\ndiff:\n%s", got, want, diff) + } + }) +} + +func TestContext2Plan_importResourceConfigGenWithAlias(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +provider "test" { + alias = "backup" +} + +import { + provider = test.backup + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: 
[]providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addr.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + if got, want := instPlan.Addr, addr; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + if instPlan.Importing.ID != "123" { + t.Errorf("expected import change from \"123\", got non-import change") + } + + want := `resource "test_object" "a" { + provider = test.backup + test_bool = null + test_list = null + test_map = null + test_number = null + test_string = "foo" +}` + got := instPlan.GeneratedConfig + if diff := cmp.Diff(want, got); len(diff) > 0 { + t.Errorf("got:\n%s\nwant:\n%s\ndiff:\n%s", got, want, diff) + } + }) +} + +func TestContext2Plan_importResourceConfigGenExpandedResource(t *testing.T) { + m := testModuleInline(t, map[string]string{ + "main.tf": ` +import { + to = test_object.a[0] + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + 
}, + }) + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + GenerateConfigPath: "generated.tf", + }) + if !diags.HasErrors() { + t.Fatalf("expected plan to error, but it did not") + } +} + +// config generation still succeeds even when planning fails +func TestContext2Plan_importResourceConfigGenWithError(t *testing.T) { + addr := mustResourceInstanceAddr("test_object.a") + m := testModuleInline(t, map[string]string{ + "main.tf": ` +import { + to = test_object.a + id = "123" +} +`, + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + p.PlanResourceChangeResponse = &providers.PlanResourceChangeResponse{ + PlannedState: cty.NullVal(cty.DynamicPseudoType), + Diagnostics: tfdiags.Diagnostics(nil).Append(errors.New("plan failed")), + } + p.ReadResourceResponse = &providers.ReadResourceResponse{ + NewState: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + } + p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_object", + State: cty.ObjectVal(map[string]cty.Value{ + "test_string": cty.StringVal("foo"), + }), + }, + }, + } + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. 
+ }) + if !diags.HasErrors() { + t.Fatal("expected error") + } + + instPlan := plan.Changes.ResourceInstance(addr) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addr) + } + + want := `resource "test_object" "a" { + test_bool = null + test_list = null + test_map = null + test_number = null + test_string = "foo" +}` + got := instPlan.GeneratedConfig + if diff := cmp.Diff(want, got); len(diff) > 0 { + t.Errorf("got:\n%s\nwant:\n%s\ndiff:\n%s", got, want, diff) + } +} + func TestContext2Plan_plannedState(t *testing.T) { addr := mustResourceInstanceAddr("test_object.a") m := testModuleInline(t, map[string]string{ diff --git a/internal/terraform/context_plan_import_test.go b/internal/terraform/context_plan_import_test.go deleted file mode 100644 index adf7ea0e1705..000000000000 --- a/internal/terraform/context_plan_import_test.go +++ /dev/null @@ -1,1383 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package terraform - -import ( - "errors" - "strings" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/plans" - "github.com/hashicorp/terraform/internal/providers" - "github.com/hashicorp/terraform/internal/states" - "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/zclconf/go-cty/cty" -) - -func TestContext2Plan_importResourceBasic(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -resource "test_object" "a" { - test_string = "foo" -} - -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - hook := new(MockHook) - ctx := testContext2(t, &ContextOpts{ - Hooks: []Hook{hook}, - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = 
&providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - t.Run(addr.String(), func(t *testing.T) { - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - if got, want := instPlan.Addr, addr; !got.Equal(want) { - t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { - t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - if instPlan.Importing.ID != "123" { - t.Errorf("expected import change from \"123\", got non-import change") - } - - if !hook.PrePlanImportCalled { - t.Fatalf("PostPlanImport hook not called") - } - if addr, wantAddr := hook.PrePlanImportAddr, instPlan.Addr; !addr.Equal(wantAddr) { - t.Errorf("expected addr to be %s, but was %s", wantAddr, addr) - } - - if !hook.PostPlanImportCalled { - t.Fatalf("PostPlanImport hook not called") - } - if addr, wantAddr := hook.PostPlanImportAddr, instPlan.Addr; !addr.Equal(wantAddr) { - t.Errorf("expected addr to be %s, but was %s", wantAddr, addr) - } - }) -} - -func TestContext2Plan_importResourceAlreadyInState(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m 
:= testModuleInline(t, map[string]string{ - "main.tf": ` -resource "test_object" "a" { - test_string = "foo" -} - -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - state := states.NewState() - root := state.EnsureModule(addrs.RootModuleInstance) - root.SetResourceInstanceCurrent( - mustResourceInstanceAddr("test_object.a").Resource, - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"test_string":"foo"}`), - }, - mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), - ) - - plan, diags := ctx.Plan(m, state, DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - t.Run(addr.String(), func(t *testing.T) { - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - if got, want := instPlan.Addr, addr; !got.Equal(want) { - t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { - t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: 
%s", got, want) - } - if instPlan.Importing != nil { - t.Errorf("expected non-import change, got import change %#v", instPlan.Importing) - } - }) -} - -func TestContext2Plan_importResourceUpdate(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -resource "test_object" "a" { - test_string = "bar" -} - -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - t.Run(addr.String(), func(t *testing.T) { - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - if got, want := instPlan.Addr, addr; !got.Equal(want) { - t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { - t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.Action, plans.Update; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - if instPlan.Importing.ID != "123" { - t.Errorf("expected import change 
from \"123\", got non-import change") - } - }) -} - -func TestContext2Plan_importResourceReplace(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -resource "test_object" "a" { - test_string = "bar" -} - -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - ForceReplace: []addrs.AbsResourceInstance{ - addr, - }, - }) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - t.Run(addr.String(), func(t *testing.T) { - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - if got, want := instPlan.Addr, addr; !got.Equal(want) { - t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { - t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.Action, plans.DeleteThenCreate; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if instPlan.Importing.ID != "123" { - t.Errorf("expected import change from \"123\", got non-import change") - } - }) -} - -func TestContext2Plan_importRefreshOnce(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m 
:= testModuleInline(t, map[string]string{ - "main.tf": ` -resource "test_object" "a" { - test_string = "bar" -} - -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - - readCalled := 0 - p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse { - readCalled++ - state, _ := simpleTestSchema().CoerceValue(cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - })) - - return providers.ReadResourceResponse{ - NewState: state, - } - } - - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - ForceReplace: []addrs.AbsResourceInstance{ - addr, - }, - }) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - if readCalled > 1 { - t.Error("ReadResource called multiple times for import") - } -} - -func TestContext2Plan_importTargetWithKeyDoesNotExist(t *testing.T) { - m := testModuleInline(t, map[string]string{ - "main.tf": ` -resource "test_object" "a" { - count = 1 - test_string = "bar" -} - -import { - to = test_object.a[42] - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: 
[]providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if !diags.HasErrors() { - t.Fatalf("expected error but got none") - } -} - -func TestContext2Plan_importIdVariable(t *testing.T) { - p := testProvider("aws") - m := testModule(t, "import-id-variable") - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - }) - - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "aws_instance", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("foo"), - }), - }, - }, - } - - _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - SetVariables: InputValues{ - "the_id": &InputValue{ - // let var take its default value - Value: cty.NilVal, - }, - }, - }) - if diags.HasErrors() { - t.Fatalf("unexpected errors: %s", diags.Err()) - } -} - -func TestContext2Plan_importIdFunc(t *testing.T) { - p := testProvider("aws") - m := testModule(t, "import-id-func") - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - }) - - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "aws_instance", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("foo"), - }), - }, - }, - } - - _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors: %s", diags.Err()) - } -} - -func TestContext2Plan_importIdDataSource(t *testing.T) { - p := testProvider("aws") - m := testModule(t, "import-id-data-source") - - p.GetProviderSchemaResponse = 
getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ - ResourceTypes: map[string]*configschema.Block{ - "aws_subnet": { - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - }, - }, - }, - DataSources: map[string]*configschema.Block{ - "aws_subnet": { - Attributes: map[string]*configschema.Attribute{ - "vpc_id": { - Type: cty.String, - Required: true, - }, - "cidr_block": { - Type: cty.String, - Computed: true, - }, - "id": { - Type: cty.String, - Computed: true, - }, - }, - }, - }, - }) - p.ReadDataSourceResponse = &providers.ReadDataSourceResponse{ - State: cty.ObjectVal(map[string]cty.Value{ - "vpc_id": cty.StringVal("abc"), - "cidr_block": cty.StringVal("10.0.1.0/24"), - "id": cty.StringVal("123"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "aws_subnet", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("foo"), - }), - }, - }, - } - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - }) - - _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors: %s", diags.Err()) - } -} - -func TestContext2Plan_importIdModule(t *testing.T) { - p := testProvider("aws") - m := testModule(t, "import-id-module") - - p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ - ResourceTypes: map[string]*configschema.Block{ - "aws_lb": { - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - }, - }, - }, - }) - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "aws_lb", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("foo"), - }), - }, - }, - } - ctx := 
testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - }) - - _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors: %s", diags.Err()) - } -} - -func TestContext2Plan_importIdInvalidNull(t *testing.T) { - p := testProvider("test") - m := testModule(t, "import-id-invalid-null") - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - - _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - SetVariables: InputValues{ - "the_id": &InputValue{ - Value: cty.NullVal(cty.String), - }, - }, - }) - if !diags.HasErrors() { - t.Fatal("succeeded; want errors") - } - if got, want := diags.Err().Error(), "The import ID cannot be null"; !strings.Contains(got, want) { - t.Fatalf("wrong error:\ngot: %s\nwant: message containing %q", got, want) - } -} - -func TestContext2Plan_importIdInvalidUnknown(t *testing.T) { - p := testProvider("test") - m := testModule(t, "import-id-invalid-unknown") - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ - ResourceTypes: map[string]*configschema.Block{ - "test_resource": { - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - }, - }, - }, - }) - p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse { - return providers.PlanResourceChangeResponse{ - PlannedState: cty.UnknownVal(cty.Object(map[string]cty.Type{ - "id": cty.String, - })), - } - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: 
"test_resource", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("foo"), - }), - }, - }, - } - - _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if !diags.HasErrors() { - t.Fatal("succeeded; want errors") - } - if got, want := diags.Err().Error(), `The import block "id" argument depends on resource attributes that cannot be determined until apply`; !strings.Contains(got, want) { - t.Fatalf("wrong error:\ngot: %s\nwant: message containing %q", got, want) - } -} - -func TestContext2Plan_importIntoModuleWithGeneratedConfig(t *testing.T) { - m := testModuleInline(t, map[string]string{ - "main.tf": ` -import { - to = test_object.a - id = "123" -} - -import { - to = module.mod.test_object.a - id = "456" -} - -module "mod" { - source = "./mod" -} -`, - "./mod/main.tf": ` -resource "test_object" "a" { - test_string = "bar" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. 
- }) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - one := mustResourceInstanceAddr("test_object.a") - two := mustResourceInstanceAddr("module.mod.test_object.a") - - onePlan := plan.Changes.ResourceInstance(one) - twoPlan := plan.Changes.ResourceInstance(two) - - // This test is just to make sure things work e2e with modules and generated - // config, so we're not too careful about the actual responses - we're just - // happy nothing panicked. See the other import tests for actual validation - // of responses and the like. - if twoPlan.Action != plans.Update { - t.Errorf("expected nested item to be updated but was %s", twoPlan.Action) - } - - if len(onePlan.GeneratedConfig) == 0 { - t.Errorf("expected root item to generate config but it didn't") - } -} - -func TestContext2Plan_importResourceConfigGen(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. 
- }) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - t.Run(addr.String(), func(t *testing.T) { - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - if got, want := instPlan.Addr, addr; !got.Equal(want) { - t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { - t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - if instPlan.Importing.ID != "123" { - t.Errorf("expected import change from \"123\", got non-import change") - } - - want := `resource "test_object" "a" { - test_bool = null - test_list = null - test_map = null - test_number = null - test_string = "foo" -}` - got := instPlan.GeneratedConfig - if diff := cmp.Diff(want, got); len(diff) > 0 { - t.Errorf("got:\n%s\nwant:\n%s\ndiff:\n%s", got, want, diff) - } - }) -} - -func TestContext2Plan_importResourceConfigGenWithAlias(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -provider "test" { - alias = "backup" -} - -import { - provider = test.backup - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: 
[]providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. - }) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - t.Run(addr.String(), func(t *testing.T) { - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - if got, want := instPlan.Addr, addr; !got.Equal(want) { - t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.PrevRunAddr, addr; !got.Equal(want) { - t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - if instPlan.Importing.ID != "123" { - t.Errorf("expected import change from \"123\", got non-import change") - } - - want := `resource "test_object" "a" { - provider = test.backup - test_bool = null - test_list = null - test_map = null - test_number = null - test_string = "foo" -}` - got := instPlan.GeneratedConfig - if diff := cmp.Diff(want, got); len(diff) > 0 { - t.Errorf("got:\n%s\nwant:\n%s\ndiff:\n%s", got, want, diff) - } - }) -} - -func TestContext2Plan_importResourceConfigGenExpandedResource(t *testing.T) { - m := testModuleInline(t, map[string]string{ - "main.tf": ` -import { - to = test_object.a[0] - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - 
}, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - GenerateConfigPath: "generated.tf", - }) - if !diags.HasErrors() { - t.Fatalf("expected plan to error, but it did not") - } -} - -// config generation still succeeds even when planning fails -func TestContext2Plan_importResourceConfigGenWithError(t *testing.T) { - addr := mustResourceInstanceAddr("test_object.a") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -import { - to = test_object.a - id = "123" -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.PlanResourceChangeResponse = &providers.PlanResourceChangeResponse{ - PlannedState: cty.NullVal(cty.DynamicPseudoType), - Diagnostics: tfdiags.Diagnostics(nil).Append(errors.New("plan failed")), - } - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ - Mode: plans.NormalMode, - GenerateConfigPath: "generated.tf", // Actual value here doesn't matter, as long as it is not empty. 
- }) - if !diags.HasErrors() { - t.Fatal("expected error") - } - - instPlan := plan.Changes.ResourceInstance(addr) - if instPlan == nil { - t.Fatalf("no plan for %s at all", addr) - } - - want := `resource "test_object" "a" { - test_bool = null - test_list = null - test_map = null - test_number = null - test_string = "foo" -}` - got := instPlan.GeneratedConfig - if diff := cmp.Diff(want, got); len(diff) > 0 { - t.Errorf("got:\n%s\nwant:\n%s\ndiff:\n%s", got, want, diff) - } -} - -func TestContext2Plan_importForEach(t *testing.T) { - m := testModuleInline(t, map[string]string{ - "main.tf": ` -locals { - things = { - first = "first_id" - second = "second_id" - } -} - -resource "test_object" "a" { - for_each = local.things - test_string = "foo" -} - -import { - for_each = local.things - to = test_object.a[each.key] - id = each.value -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - firstAddr := mustResourceInstanceAddr(`test_object.a["first"]`) - secondAddr := mustResourceInstanceAddr(`test_object.a["second"]`) - - for _, instPlan := range plan.Changes.Resources { - switch { - case instPlan.Addr.Equal(firstAddr): - if instPlan.Importing.ID != "first_id" { - t.Errorf("expected import ID of \"first_id\", got %q", instPlan.Importing.ID) - } - case 
instPlan.Addr.Equal(secondAddr): - if instPlan.Importing.ID != "second_id" { - t.Errorf("expected import ID of \"second_id\", got %q", instPlan.Importing.ID) - } - default: - t.Errorf("unexpected change for %s", instPlan.Addr) - } - - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - } -} - -func TestContext2Plan_importForEachmodule(t *testing.T) { - m := testModuleInline(t, map[string]string{ - "main.tf": ` -locals { - things = { - brown = "brown_id" - blue = "blue_id" - } -} - -module "sub" { - for_each = local.things - source = "./sub" - things = local.things -} - -import { - for_each = [ - { - mod = "brown" - res = "brown" - id = "brown_brown_id" - }, - { - mod = "brown" - res = "blue" - id = "brown_blue_id" - }, - { - mod = "blue" - res = "brown" - id = "blue_brown_id" - }, - { - mod = "blue" - res = "blue" - id = "blue_blue_id" - }, - ] - to = module.sub[each.value.mod].test_object.a[each.value.res] - id = each.value.id -} -`, - - "./sub/main.tf": ` -variable things { - type = map(string) -} - -locals { - static_id = "foo" -} - -resource "test_object" "a" { - for_each = var.things - test_string = local.static_id -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - 
- plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - brownBlueAddr := mustResourceInstanceAddr(`module.sub["brown"].test_object.a["brown"]`) - brownBrownAddr := mustResourceInstanceAddr(`module.sub["brown"].test_object.a["blue"]`) - blueBlueAddr := mustResourceInstanceAddr(`module.sub["blue"].test_object.a["brown"]`) - blueBrownAddr := mustResourceInstanceAddr(`module.sub["blue"].test_object.a["blue"]`) - - for _, instPlan := range plan.Changes.Resources { - switch { - case instPlan.Addr.Equal(brownBlueAddr): - if instPlan.Importing.ID != "brown_brown_id" { - t.Errorf("expected import ID of \"brown_brown_id\", got %q", instPlan.Importing.ID) - } - case instPlan.Addr.Equal(brownBrownAddr): - if instPlan.Importing.ID != "brown_blue_id" { - t.Errorf("expected import ID of \"brown_blue_id\", got %q", instPlan.Importing.ID) - } - case instPlan.Addr.Equal(blueBlueAddr): - if instPlan.Importing.ID != "blue_brown_id" { - t.Errorf("expected import ID of \"blue_brown_id\", got %q", instPlan.Importing.ID) - } - case instPlan.Addr.Equal(blueBrownAddr): - if instPlan.Importing.ID != "blue_blue_id" { - t.Errorf("expected import ID of \"blue_blue_id\", got %q", instPlan.Importing.ID) - } - default: - t.Errorf("unexpected change for %s", instPlan.Addr) - } - - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - } -} - -func TestContext2Plan_importForEachPartial(t *testing.T) { - // one of the imported instances already exists in the state, which should - // result in a non-import, NoOp change - m := testModuleInline(t, map[string]string{ - "main.tf": ` -locals { - things = { - first = "first_id" - second = "second_id" - } -} - -resource 
"test_object" "a" { - for_each = local.things - test_string = "foo" -} - -import { - for_each = local.things - to = test_object.a[each.key] - id = each.value -} -`, - }) - - p := simpleMockProvider() - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - state := states.NewState() - root := state.EnsureModule(addrs.RootModuleInstance) - root.SetResourceInstanceCurrent( - mustResourceInstanceAddr(`test_object.a["first"]`).Resource, - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"test_string":"foo"}`), - }, - mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), - ) - - plan, diags := ctx.Plan(m, state, DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", diags.Err().Error()) - } - - firstAddr := mustResourceInstanceAddr(`test_object.a["first"]`) - secondAddr := mustResourceInstanceAddr(`test_object.a["second"]`) - - for _, instPlan := range plan.Changes.Resources { - switch { - case instPlan.Addr.Equal(firstAddr): - if instPlan.Importing != nil { - t.Errorf("expected no import for %s, got %#v", firstAddr, instPlan.Importing) - } - case instPlan.Addr.Equal(secondAddr): - if instPlan.Importing.ID != "second_id" { - t.Errorf("expected import ID of \"second_id\", got %q", instPlan.Importing.ID) - } - default: - t.Errorf("unexpected change for %s", instPlan.Addr) - } - - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: 
%s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - } -} - -func TestContext2Plan_importForEachFromData(t *testing.T) { - m := testModuleInline(t, map[string]string{ - "main.tf": ` -data "test_object" "d" { -} - -resource "test_object" "a" { - count = 2 - test_string = "foo" -} - -import { - for_each = data.test_object.d.objects - to = test_object.a[each.key] - id = each.value -} -`, - }) - - p := simpleMockProvider() - p.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{ - Provider: providers.Schema{Block: simpleTestSchema()}, - ResourceTypes: map[string]providers.Schema{ - "test_object": providers.Schema{Block: simpleTestSchema()}, - }, - DataSources: map[string]providers.Schema{ - "test_object": providers.Schema{ - Block: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "objects": { - Type: cty.List(cty.String), - Computed: true, - }, - }, - }, - }, - }, - } - - ctx := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - }) - - p.ReadDataSourceResponse = &providers.ReadDataSourceResponse{ - State: cty.ObjectVal(map[string]cty.Value{ - "objects": cty.ListVal([]cty.Value{ - cty.StringVal("first_id"), cty.StringVal("second_id"), - }), - }), - } - - p.ReadResourceResponse = &providers.ReadResourceResponse{ - NewState: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - } - p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_object", - State: cty.ObjectVal(map[string]cty.Value{ - "test_string": cty.StringVal("foo"), - }), - }, - }, - } - - plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) - if diags.HasErrors() { - t.Fatalf("unexpected errors\n%s", 
diags.Err().Error()) - } - - firstAddr := mustResourceInstanceAddr(`test_object.a[0]`) - secondAddr := mustResourceInstanceAddr(`test_object.a[1]`) - - for _, instPlan := range plan.Changes.Resources { - switch { - case instPlan.Addr.Equal(firstAddr): - if instPlan.Importing.ID != "first_id" { - t.Errorf("expected import ID of \"first_id\", got %q", instPlan.Importing.ID) - } - case instPlan.Addr.Equal(secondAddr): - if instPlan.Importing.ID != "second_id" { - t.Errorf("expected import ID of \"second_id\", got %q", instPlan.Importing.ID) - } - default: - t.Errorf("unexpected change for %s", instPlan.Addr) - } - - if got, want := instPlan.Action, plans.NoOp; got != want { - t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) - } - if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { - t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) - } - } -} diff --git a/internal/terraform/context_validate_test.go b/internal/terraform/context_validate_test.go index 6a93ad28522a..9da63dc469f3 100644 --- a/internal/terraform/context_validate_test.go +++ b/internal/terraform/context_validate_test.go @@ -2485,33 +2485,3 @@ locals { t.Fatalf("expected deprecated warning, got: %q\n", warn) } } - -func TestContext2Validate_unknownForEach(t *testing.T) { - p := testProvider("aws") - m := testModuleInline(t, map[string]string{ - "main.tf": ` -resource "aws_instance" "test" { -} - -locals { - follow = { - (aws_instance.test.id): "follow" - } -} - -resource "aws_instance" "follow" { - for_each = local.follow -} - `, - }) - c := testContext2(t, &ContextOpts{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - }) - - diags := c.Validate(m) - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } -} diff --git a/internal/terraform/context_walk.go b/internal/terraform/context_walk.go index db371d52fa6b..002cd878bfe7 100644 --- 
a/internal/terraform/context_walk.go +++ b/internal/terraform/context_walk.go @@ -10,7 +10,6 @@ import ( "github.com/hashicorp/terraform/internal/checks" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/instances" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" @@ -42,10 +41,6 @@ type graphWalkOpts struct { // the apply phase. PlanTimeTimestamp time.Time - // Overrides contains the set of overrides we should apply during this - // operation. - Overrides *mocking.Overrides - MoveResults refactoring.MoveResults } @@ -155,6 +150,5 @@ func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *Con Operation: operation, StopContext: c.runContext, PlanTimestamp: opts.PlanTimeTimestamp, - Overrides: opts.Overrides, } } diff --git a/internal/terraform/eval_context.go b/internal/terraform/eval_context.go index e16d034aef0e..cb1735218c7f 100644 --- a/internal/terraform/eval_context.go +++ b/internal/terraform/eval_context.go @@ -5,21 +5,18 @@ package terraform import ( "github.com/hashicorp/hcl/v2" - "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/checks" - "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/lang" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" + "github.com/zclconf/go-cty/cty" ) // EvalContext 
is the interface that is given to eval nodes to execute. @@ -44,7 +41,7 @@ type EvalContext interface { // It is an error to initialize the same provider more than once. This // method will panic if the module instance address of the given provider // configuration does not match the Path() of the EvalContext. - InitProvider(addr addrs.AbsProviderConfig, configs *configs.Provider) (providers.Interface, error) + InitProvider(addr addrs.AbsProviderConfig) (providers.Interface, error) // Provider gets the provider instance with the given address (already // initialized) or returns nil if the provider isn't initialized. @@ -204,10 +201,6 @@ type EvalContext interface { // objects accessible through it. MoveResults() refactoring.MoveResults - // Overrides contains the modules and resources we should mock as part of - // this execution. - Overrides() *mocking.Overrides - // WithPath returns a copy of the context with the internal path set to the // path argument. WithPath(path addrs.ModuleInstance) EvalContext diff --git a/internal/terraform/eval_context_builtin.go b/internal/terraform/eval_context_builtin.go index e8252dc5c5ba..36b896a87558 100644 --- a/internal/terraform/eval_context_builtin.go +++ b/internal/terraform/eval_context_builtin.go @@ -14,11 +14,9 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/checks" - "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/lang" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" @@ -75,7 +73,6 @@ type BuiltinEvalContext struct { PrevRunStateValue *states.SyncState InstanceExpanderValue *instances.Expander MoveResultsValue refactoring.MoveResults - 
OverrideValues *mocking.Overrides } // BuiltinEvalContext implements EvalContext @@ -121,7 +118,7 @@ func (ctx *BuiltinEvalContext) Input() UIInput { return ctx.InputValue } -func (ctx *BuiltinEvalContext) InitProvider(addr addrs.AbsProviderConfig, config *configs.Provider) (providers.Interface, error) { +func (ctx *BuiltinEvalContext) InitProvider(addr addrs.AbsProviderConfig) (providers.Interface, error) { // If we already initialized, it is an error if p := ctx.Provider(addr); p != nil { return nil, fmt.Errorf("%s is already initialized", addr) @@ -140,17 +137,6 @@ func (ctx *BuiltinEvalContext) InitProvider(addr addrs.AbsProviderConfig, config } log.Printf("[TRACE] BuiltinEvalContext: Initialized %q provider for %s", addr.String(), addr) - - // The config might be nil, if there was no config block defined for this - // provider. - if config != nil && config.Mock { - log.Printf("[TRACE] BuiltinEvalContext: Mocked %q provider for %s", addr.String(), addr) - p = &providers.Mock{ - Provider: p, - Data: config.MockData, - } - } - ctx.ProviderCache[key] = p return p, nil @@ -522,7 +508,3 @@ func (ctx *BuiltinEvalContext) InstanceExpander() *instances.Expander { func (ctx *BuiltinEvalContext) MoveResults() refactoring.MoveResults { return ctx.MoveResultsValue } - -func (ctx *BuiltinEvalContext) Overrides() *mocking.Overrides { - return ctx.OverrideValues -} diff --git a/internal/terraform/eval_context_builtin_test.go b/internal/terraform/eval_context_builtin_test.go index 47199ae78bf2..bfcc546f4bed 100644 --- a/internal/terraform/eval_context_builtin_test.go +++ b/internal/terraform/eval_context_builtin_test.go @@ -8,11 +8,9 @@ import ( "sync" "testing" - "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/providers" + "github.com/zclconf/go-cty/cty" ) func TestBuiltinEvalContextProviderInput(t *testing.T) { @@ -77,27 +75,15 @@ func 
TestBuildingEvalContextInitProvider(t *testing.T) { Provider: addrs.NewDefaultProvider("test"), Alias: "foo", } - providerAddrMock := addrs.AbsProviderConfig{ - Module: addrs.RootModule, - Provider: addrs.NewDefaultProvider("test"), - Alias: "mock", - } - _, err := ctx.InitProvider(providerAddrDefault, nil) + _, err := ctx.InitProvider(providerAddrDefault) if err != nil { t.Fatalf("error initializing provider test: %s", err) } - _, err = ctx.InitProvider(providerAddrAlias, nil) + _, err = ctx.InitProvider(providerAddrAlias) if err != nil { t.Fatalf("error initializing provider test.foo: %s", err) } - - _, err = ctx.InitProvider(providerAddrMock, &configs.Provider{ - Mock: true, - }) - if err != nil { - t.Fatalf("error initializing provider test.mock: %s", err) - } } func testBuiltinEvalContext(t *testing.T) *BuiltinEvalContext { diff --git a/internal/terraform/eval_context_mock.go b/internal/terraform/eval_context_mock.go index 18d174c4aa81..19829feaf6cb 100644 --- a/internal/terraform/eval_context_mock.go +++ b/internal/terraform/eval_context_mock.go @@ -6,22 +6,19 @@ package terraform import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hcldec" - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" - "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/checks" - "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/lang" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" + "github.com/zclconf/go-cty/cty" + 
"github.com/zclconf/go-cty/cty/convert" ) // MockEvalContext is a mock version of EvalContext that can be used @@ -154,9 +151,6 @@ type MockEvalContext struct { InstanceExpanderCalled bool InstanceExpanderExpander *instances.Expander - - OverridesCalled bool - OverrideValues *mocking.Overrides } // MockEvalContext implements EvalContext @@ -183,7 +177,7 @@ func (c *MockEvalContext) Input() UIInput { return c.InputInput } -func (c *MockEvalContext) InitProvider(addr addrs.AbsProviderConfig, _ *configs.Provider) (providers.Interface, error) { +func (c *MockEvalContext) InitProvider(addr addrs.AbsProviderConfig) (providers.Interface, error) { c.InitProviderCalled = true c.InitProviderType = addr.String() c.InitProviderAddr = addr @@ -408,8 +402,3 @@ func (c *MockEvalContext) InstanceExpander() *instances.Expander { c.InstanceExpanderCalled = true return c.InstanceExpanderExpander } - -func (c *MockEvalContext) Overrides() *mocking.Overrides { - c.OverridesCalled = true - return c.OverrideValues -} diff --git a/internal/terraform/eval_for_each.go b/internal/terraform/eval_for_each.go index 2b5b4a7857c1..ae9e169f4a26 100644 --- a/internal/terraform/eval_for_each.go +++ b/internal/terraform/eval_for_each.go @@ -10,294 +10,151 @@ import ( "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/lang" "github.com/hashicorp/terraform/internal/lang/marks" "github.com/hashicorp/terraform/internal/tfdiags" ) +// evaluateForEachExpression is our standard mechanism for interpreting an +// expression given for a "for_each" argument on a resource or a module. This +// should be called during expansion in order to determine the final keys and +// values. 
+// // evaluateForEachExpression differs from evaluateForEachExpressionValue by // returning an error if the count value is not known, and converting the // cty.Value to a map[string]cty.Value for compatibility with other calls. func evaluateForEachExpression(expr hcl.Expression, ctx EvalContext) (forEach map[string]cty.Value, diags tfdiags.Diagnostics) { - return newForEachEvaluator(expr, ctx).ResourceValue() -} - -// rorEachEvaluator is the standard mechanism for interpreting an expression -// given for a "for_each" argument on a resource, module, or import. -func newForEachEvaluator(expr hcl.Expression, ctx EvalContext) *forEachEvaluator { - if ctx == nil { - panic("nil EvalContext") - } - - return &forEachEvaluator{ - ctx: ctx, - expr: expr, - } -} - -// forEachEvaluator is responsible for evaluating for_each expressions, using -// different rules depending on the desired context. -type forEachEvaluator struct { - // We bundle this functionality into a structure, because internal - // validation requires not only the resulting value, but also the original - // expression and the hcl EvalContext to build the corresponding - // diagnostic. Every method's dependency on all the evaluation pieces - // otherwise prevents refactoring and we end up with a single giant - // function. - ctx EvalContext - expr hcl.Expression - - // internal - hclCtx *hcl.EvalContext -} - -// ResourceForEachValue returns a known for_each map[string]cty.Value -// appropriate for use within resource expansion. 
-func (ev *forEachEvaluator) ResourceValue() (map[string]cty.Value, tfdiags.Diagnostics) {
-	res := map[string]cty.Value{}
-
-	// no expression always results in an empty map
-	if ev.expr == nil {
-		return res, nil
-	}
-
-	forEachVal, diags := ev.Value()
-	if diags.HasErrors() {
-		return res, diags
-	}
-
-	// ensure our value is known for use in resource expansion
-	diags = diags.Append(ev.ensureKnownForResource(forEachVal))
-	if diags.HasErrors() {
-		return res, diags
-	}
-
-	// validate the for_each value for use in resource expansion
-	diags = diags.Append(ev.validateResource(forEachVal))
-	if diags.HasErrors() {
-		return res, diags
-	}
+	forEachVal, diags := evaluateForEachExpressionValue(expr, ctx, false)
+	// forEachVal might be unknown, but if it is then there should already
+	// be an error about it in diags, which we'll return below.
 
 	if forEachVal.IsNull() || !forEachVal.IsKnown() || markSafeLengthInt(forEachVal) == 0 {
-		// we check length, because an empty set returns a nil map which will panic below
-		return res, diags
+		// we check length, because an empty set returns a nil map
+		return map[string]cty.Value{}, diags
 	}
-	res = forEachVal.AsValueMap()
-	return res, diags
+	return forEachVal.AsValueMap(), diags
 }
 
-// ImportValue returns the for_each map for use within an import block,
-// enumerated as individual instances.RepetitionData values.
-func (ev *forEachEvaluator) ImportValues() ([]instances.RepetitionData, tfdiags.Diagnostics) {
-	var res []instances.RepetitionData
-	if ev.expr == nil {
-		return res, nil
-	}
-
-	forEachVal, diags := ev.Value()
-	if diags.HasErrors() {
-		return res, diags
-	}
-
-	// ensure our value is known for use in resource expansion
-	diags = diags.Append(ev.ensureKnownForImport(forEachVal))
-	if diags.HasErrors() {
-		return res, diags
-	}
-
-	if forEachVal.IsNull() {
-		return res, diags
-	}
-
-	val, marks := forEachVal.Unmark()
-
-	it := val.ElementIterator()
-	for it.Next() {
-		k, v := it.Element()
-		res = append(res, instances.RepetitionData{
-			EachKey:   k,
-			EachValue: v.WithMarks(marks),
-		})
-
-	}
-
-	return res, diags
-}
-
-// Value returns the raw cty.Value evaluated from the given for_each expression
-func (ev *forEachEvaluator) Value() (cty.Value, tfdiags.Diagnostics) {
+// evaluateForEachExpressionValue is like evaluateForEachExpression
+// except that it returns a cty.Value map or set which can be unknown.
+func evaluateForEachExpressionValue(expr hcl.Expression, ctx EvalContext, allowUnknown bool) (cty.Value, tfdiags.Diagnostics) {
 	var diags tfdiags.Diagnostics
+	nullMap := cty.NullVal(cty.Map(cty.DynamicPseudoType))
 
-	if ev.expr == nil {
-		// a nil expression always results in a null value
-		return cty.NullVal(cty.Map(cty.DynamicPseudoType)), nil
+	if expr == nil {
+		return nullMap, diags
 	}
 
-	refs, moreDiags := lang.ReferencesInExpr(addrs.ParseRef, ev.expr)
+	refs, moreDiags := lang.ReferencesInExpr(addrs.ParseRef, expr)
 	diags = diags.Append(moreDiags)
-	scope := ev.ctx.EvaluationScope(nil, nil, EvalDataForNoInstanceKey)
+	scope := ctx.EvaluationScope(nil, nil, EvalDataForNoInstanceKey)
+	var hclCtx *hcl.EvalContext
 	if scope != nil {
-		ev.hclCtx, moreDiags = scope.EvalContext(refs)
+		hclCtx, moreDiags = scope.EvalContext(refs)
 	} else {
 		// This shouldn't happen in real code, but it can unfortunately arise
 		// in unit tests due to incompletely-implemented mocks. :(
-		ev.hclCtx = &hcl.EvalContext{}
+		hclCtx = &hcl.EvalContext{}
 	}
-
 	diags = diags.Append(moreDiags)
 	if diags.HasErrors() {
 		// Can't continue if we don't even have a valid scope
-		return cty.DynamicVal, diags
+		return nullMap, diags
 	}
 
-	forEachVal, forEachDiags := ev.expr.Value(ev.hclCtx)
+	forEachVal, forEachDiags := expr.Value(hclCtx)
 	diags = diags.Append(forEachDiags)
-	return forEachVal, diags
-}
-
-// ensureKnownForImport checks that the value is entirely known for use within
-// import expansion.
-func (ev *forEachEvaluator) ensureKnownForImport(forEachVal cty.Value) tfdiags.Diagnostics {
-	var diags tfdiags.Diagnostics
-
-	if !forEachVal.IsWhollyKnown() {
-		diags = diags.Append(&hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Invalid for_each argument",
-			Detail:      "The \"for_each\" expression includes values derived from other resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of values that might be used to import this resource.",
-			Subject:     ev.expr.Range().Ptr(),
-			Expression:  ev.expr,
-			EvalContext: ev.hclCtx,
-			Extra:       diagnosticCausedByUnknown(true),
-		})
-	}
-	return diags
-}
-
-// ensureKnownForResource checks that the value is known within the rules of
-// resource and module expansion.
-func (ev *forEachEvaluator) ensureKnownForResource(forEachVal cty.Value) tfdiags.Diagnostics {
-	var diags tfdiags.Diagnostics
-	ty := forEachVal.Type()
-	const errInvalidUnknownDetailMap = "The \"for_each\" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.\n\nWhen working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.\n\nAlternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge."
-	const errInvalidUnknownDetailSet = "The \"for_each\" set includes values derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.\n\nWhen working with unknown values in for_each, it's better to use a map value where the keys are defined statically in your configuration and where only the values contain apply-time results.\n\nAlternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge."
-
-	if !forEachVal.IsKnown() {
-		var detailMsg string
-		switch {
-		case ty.IsSetType():
-			detailMsg = errInvalidUnknownDetailSet
-		default:
-			detailMsg = errInvalidUnknownDetailMap
-		}
-
-		diags = diags.Append(&hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Invalid for_each argument",
-			Detail:      detailMsg,
-			Subject:     ev.expr.Range().Ptr(),
-			Expression:  ev.expr,
-			EvalContext: ev.hclCtx,
-			Extra:       diagnosticCausedByUnknown(true),
-		})
-		return diags
-	}
-
-	if ty.IsSetType() && !forEachVal.IsWhollyKnown() {
-		diags = diags.Append(&hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Invalid for_each argument",
-			Detail:      errInvalidUnknownDetailSet,
-			Subject:     ev.expr.Range().Ptr(),
-			Expression:  ev.expr,
-			EvalContext: ev.hclCtx,
-			Extra:       diagnosticCausedByUnknown(true),
-		})
-	}
-	return diags
-}
-
-// ValidateResourceValue is used from validation walks to verify the validity
-// of the resource for_Each expression, while still allowing for unknown
-// values.
-func (ev *forEachEvaluator) ValidateResourceValue() tfdiags.Diagnostics {
-	val, diags := ev.Value()
-	if diags.HasErrors() {
-		return diags
-	}
-
-	return diags.Append(ev.validateResource(val))
-}
-
-// validateResource validates the type and values of the forEachVal, while
-// still allowing unknown values for use within the validation walk.
-func (ev *forEachEvaluator) validateResource(forEachVal cty.Value) tfdiags.Diagnostics {
-	var diags tfdiags.Diagnostics
-
+	// If a whole map is marked, or a set contains marked values (which means the set is then marked)
 	// give an error diagnostic as this value cannot be used in for_each
 	if forEachVal.HasMark(marks.Sensitive) {
 		diags = diags.Append(&hcl.Diagnostic{
 			Severity:    hcl.DiagError,
 			Summary:     "Invalid for_each argument",
 			Detail:      "Sensitive values, or values derived from sensitive values, cannot be used as for_each arguments. If used, the sensitive value could be exposed as a resource instance key.",
-			Subject:     ev.expr.Range().Ptr(),
-			Expression:  ev.expr,
-			EvalContext: ev.hclCtx,
+			Subject:     expr.Range().Ptr(),
+			Expression:  expr,
+			EvalContext: hclCtx,
 			Extra:       diagnosticCausedBySensitive(true),
 		})
 	}
 
 	if diags.HasErrors() {
-		return diags
+		return nullMap, diags
 	}
 	ty := forEachVal.Type()
 
+	const errInvalidUnknownDetailMap = "The \"for_each\" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.\n\nWhen working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.\n\nAlternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge."
+	const errInvalidUnknownDetailSet = "The \"for_each\" set includes values derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.\n\nWhen working with unknown values in for_each, it's better to use a map value where the keys are defined statically in your configuration and where only the values contain apply-time results.\n\nAlternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge."
+
 	switch {
 	case forEachVal.IsNull():
		diags = diags.Append(&hcl.Diagnostic{
 			Severity: hcl.DiagError,
 			Summary:  "Invalid for_each argument",
 			Detail:   `The given "for_each" argument value is unsuitable: the given "for_each" argument value is null.
A map, or set of strings is allowed.`, - Subject: ev.expr.Range().Ptr(), - Expression: ev.expr, - EvalContext: ev.hclCtx, + Subject: expr.Range().Ptr(), + Expression: expr, + EvalContext: hclCtx, }) - return diags + return nullMap, diags + case !forEachVal.IsKnown(): + if !allowUnknown { + var detailMsg string + switch { + case ty.IsSetType(): + detailMsg = errInvalidUnknownDetailSet + default: + detailMsg = errInvalidUnknownDetailMap + } - case forEachVal.Type() == cty.DynamicPseudoType: - // We may not have any type information if this is during validation, - // so we need to return early. During plan this can't happen because we - // validate for unknowns first. - return diags + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid for_each argument", + Detail: detailMsg, + Subject: expr.Range().Ptr(), + Expression: expr, + EvalContext: hclCtx, + Extra: diagnosticCausedByUnknown(true), + }) + } + // ensure that we have a map, and not a DynamicValue + return cty.UnknownVal(cty.Map(cty.DynamicPseudoType)), diags case !(ty.IsMapType() || ty.IsSetType() || ty.IsObjectType()): diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, Summary: "Invalid for_each argument", Detail: fmt.Sprintf(`The given "for_each" argument value is unsuitable: the "for_each" argument must be a map, or set of strings, and you have provided a value of type %s.`, ty.FriendlyName()), - Subject: ev.expr.Range().Ptr(), - Expression: ev.expr, - EvalContext: ev.hclCtx, + Subject: expr.Range().Ptr(), + Expression: expr, + EvalContext: hclCtx, }) - return diags - - case !forEachVal.IsKnown(): - return diags + return nullMap, diags case markSafeLengthInt(forEachVal) == 0: // If the map is empty ({}), return an empty map, because cty will // return nil when representing {} AsValueMap. 
This also covers an empty // set (toset([])) - return diags + return forEachVal, diags } if ty.IsSetType() { // since we can't use a set values that are unknown, we treat the // entire set as unknown if !forEachVal.IsWhollyKnown() { - return diags + if !allowUnknown { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid for_each argument", + Detail: errInvalidUnknownDetailSet, + Subject: expr.Range().Ptr(), + Expression: expr, + EvalContext: hclCtx, + Extra: diagnosticCausedByUnknown(true), + }) + } + return cty.UnknownVal(ty), diags } if ty.ElementType() != cty.String { @@ -305,11 +162,11 @@ func (ev *forEachEvaluator) validateResource(forEachVal cty.Value) tfdiags.Diagn Severity: hcl.DiagError, Summary: "Invalid for_each set argument", Detail: fmt.Sprintf(`The given "for_each" argument value is unsuitable: "for_each" supports maps and sets of strings, but you have provided a set containing type %s.`, forEachVal.Type().ElementType().FriendlyName()), - Subject: ev.expr.Range().Ptr(), - Expression: ev.expr, - EvalContext: ev.hclCtx, + Subject: expr.Range().Ptr(), + Expression: expr, + EvalContext: hclCtx, }) - return diags + return cty.NullVal(ty), diags } // A set of strings may contain null, which makes it impossible to @@ -322,16 +179,16 @@ func (ev *forEachEvaluator) validateResource(forEachVal cty.Value) tfdiags.Diagn Severity: hcl.DiagError, Summary: "Invalid for_each set argument", Detail: `The given "for_each" argument value is unsuitable: "for_each" sets must not contain null values.`, - Subject: ev.expr.Range().Ptr(), - Expression: ev.expr, - EvalContext: ev.hclCtx, + Subject: expr.Range().Ptr(), + Expression: expr, + EvalContext: hclCtx, }) - return diags + return cty.NullVal(ty), diags } } } - return diags + return forEachVal, nil } // markSafeLengthInt allows calling LengthInt on marked values safely diff --git a/internal/terraform/eval_for_each_test.go b/internal/terraform/eval_for_each_test.go index 
3f7717d09015..b6f5629ebb00 100644 --- a/internal/terraform/eval_for_each_test.go +++ b/internal/terraform/eval_for_each_test.go @@ -179,7 +179,7 @@ func TestEvaluateForEachExpression_errors(t *testing.T) { _, diags := evaluateForEachExpression(test.Expr, ctx) if len(diags) != 1 { - t.Fatalf("got %d diagnostics; want 1", diags) + t.Fatalf("got %d diagnostics; want 1", len(diags)) } if got, want := diags[0].Severity(), tfdiags.Error; got != want { t.Errorf("wrong diagnostic severity %#v; want %#v", got, want) @@ -221,11 +221,15 @@ func TestEvaluateForEachExpressionKnown(t *testing.T) { t.Run(name, func(t *testing.T) { ctx := &MockEvalContext{} ctx.installSimpleEval() - diags := newForEachEvaluator(expr, ctx).ValidateResourceValue() + forEachVal, diags := evaluateForEachExpressionValue(expr, ctx, true) if len(diags) != 0 { t.Errorf("unexpected diagnostics %s", spew.Sdump(diags)) } + + if forEachVal.IsKnown() { + t.Error("got known, want unknown") + } }) } } diff --git a/internal/terraform/eval_import.go b/internal/terraform/eval_import.go index aeebf37f3451..ed024f31bc55 100644 --- a/internal/terraform/eval_import.go +++ b/internal/terraform/eval_import.go @@ -7,16 +7,13 @@ import ( "fmt" "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/instances" - "github.com/hashicorp/terraform/internal/lang/marks" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/gocty" ) -func evaluateImportIdExpression(expr hcl.Expression, ctx EvalContext, keyData instances.RepetitionData) (string, tfdiags.Diagnostics) { +func evaluateImportIdExpression(expr hcl.Expression, ctx EvalContext) (string, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics // import blocks only exist in the root module, and must be evaluated in @@ -32,8 +29,7 @@ func evaluateImportIdExpression(expr hcl.Expression, ctx EvalContext, keyData in
}) } - scope := ctx.EvaluationScope(nil, nil, keyData) - importIdVal, evalDiags := scope.EvalExpr(expr, cty.String) + importIdVal, evalDiags := ctx.EvaluateExpr(expr, cty.String, nil) diags = diags.Append(evalDiags) if importIdVal.IsNull() { @@ -57,10 +53,6 @@ func evaluateImportIdExpression(expr hcl.Expression, ctx EvalContext, keyData in }) } - // Import data may have marks, which we can discard because the id is only - // sent to the provider. - importIdVal, _ = importIdVal.Unmark() - var importId string err := gocty.FromCtyValue(importIdVal, &importId) if err != nil { @@ -74,126 +66,3 @@ func evaluateImportIdExpression(expr hcl.Expression, ctx EvalContext, keyData in return importId, diags } - -func evalImportToExpression(expr hcl.Expression, keyData instances.RepetitionData) (addrs.AbsResourceInstance, tfdiags.Diagnostics) { - var res addrs.AbsResourceInstance - var diags tfdiags.Diagnostics - - traversal, diags := importToExprToTraversal(expr, keyData) - if diags.HasErrors() { - return res, diags - } - - target, targetDiags := addrs.ParseTarget(traversal) - diags = diags.Append(targetDiags) - if diags.HasErrors() { - return res, targetDiags - } - - switch sub := target.Subject.(type) { - case addrs.AbsResource: - res = sub.Instance(addrs.NoKey) - case addrs.AbsResourceInstance: - res = sub - default: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid import 'to' expression", - Detail: fmt.Sprintf("The import block 'to' argument %s does not resolve to a single resource instance.", sub), - Subject: expr.Range().Ptr(), - }) - } - - return res, diags -} - -// trggersExprToTraversal takes an hcl expression limited to the syntax allowed -// in replace_triggered_by, and converts it to a static traversal. The -// RepetitionData contains the data necessary to evaluate the only allowed -// variables in the expression, count.index and each.key. 
-func importToExprToTraversal(expr hcl.Expression, keyData instances.RepetitionData) (hcl.Traversal, tfdiags.Diagnostics) { - var trav hcl.Traversal - var diags tfdiags.Diagnostics - - switch e := expr.(type) { - case *hclsyntax.RelativeTraversalExpr: - t, d := importToExprToTraversal(e.Source, keyData) - diags = diags.Append(d) - trav = append(trav, t...) - trav = append(trav, e.Traversal...) - - case *hclsyntax.ScopeTraversalExpr: - // a static reference, we can just append the traversal - trav = append(trav, e.Traversal...) - - case *hclsyntax.IndexExpr: - // Get the collection from the index expression - t, d := importToExprToTraversal(e.Collection, keyData) - diags = diags.Append(d) - if diags.HasErrors() { - return nil, diags - } - trav = append(trav, t...) - - // The index key is the only place where we could have variables that - // reference count and each, so we need to parse those independently. - idx, hclDiags := parseImportToKeyExpression(e.Key, keyData) - diags = diags.Append(hclDiags) - - trav = append(trav, idx) - - default: - // if we don't recognise the expression type (which means we are likely - // dealing with a test mock), try and interpret this as an absolute - // traversal - t, d := hcl.AbsTraversalForExpr(e) - diags = diags.Append(d) - trav = append(trav, t...) - } - - return trav, diags -} - -// parseImportToKeyExpression takes an hcl.Expression and parses it as an index key, while -// evaluating any references to count.index or each.key. 
-func parseImportToKeyExpression(expr hcl.Expression, keyData instances.RepetitionData) (hcl.TraverseIndex, hcl.Diagnostics) { - idx := hcl.TraverseIndex{ - SrcRange: expr.Range(), - } - - ctx := &hcl.EvalContext{ - Variables: map[string]cty.Value{ - "each": cty.ObjectVal(map[string]cty.Value{ - "key": keyData.EachKey, - "value": keyData.EachValue, - }), - }, - } - - val, diags := expr.Value(ctx) - if diags.HasErrors() { - // catch the most common case of an unsupported variable and try to - // give the user a slightly more helpful error - for i := range diags { - if diags[i].Summary == "Unknown variable" { - diags[i].Detail += "Only \"each.key\" and \"each.value\" can be used in import address index expressions." - } - } - - return idx, diags - } - - if val.HasMark(marks.Sensitive) { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid index expression", - Detail: "Import address index expression cannot be sensitive.", - Subject: expr.Range().Ptr(), - }) - return idx, diags - } - - idx.Key = val - return idx, nil - -} diff --git a/internal/terraform/evaluate_triggers.go b/internal/terraform/evaluate_triggers.go index 1d615d025da7..4cc0b12d8d65 100644 --- a/internal/terraform/evaluate_triggers.go +++ b/internal/terraform/evaluate_triggers.go @@ -60,7 +60,7 @@ func triggersExprToTraversal(expr hcl.Expression, keyData instances.RepetitionDa // The index key is the only place where we could have variables that // reference count and each, so we need to parse those independently. 
- idx, hclDiags := parseReplaceTriggeredByKeyExpr(e.Key, keyData) + idx, hclDiags := parseIndexKeyExpr(e.Key, keyData) diags = diags.Append(hclDiags) trav = append(trav, idx) @@ -80,9 +80,9 @@ func triggersExprToTraversal(expr hcl.Expression, keyData instances.RepetitionDa return trav, diags } -// parseReplaceTriggeredByKeyExpr takes an hcl.Expression and parses it as an index key, while +// parseIndexKeyExpr takes an hcl.Expression and parses it as an index key, while // evaluating any references to count.index or each.key. -func parseReplaceTriggeredByKeyExpr(expr hcl.Expression, keyData instances.RepetitionData) (hcl.TraverseIndex, hcl.Diagnostics) { +func parseIndexKeyExpr(expr hcl.Expression, keyData instances.RepetitionData) (hcl.TraverseIndex, hcl.Diagnostics) { idx := hcl.TraverseIndex{ SrcRange: expr.Range(), } diff --git a/internal/terraform/graph.go b/internal/terraform/graph.go index 199078492bcd..f594d07c0825 100644 --- a/internal/terraform/graph.go +++ b/internal/terraform/graph.go @@ -8,10 +8,7 @@ import ( "log" "strings" - "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/internal/logging" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/hashicorp/terraform/internal/addrs" @@ -76,12 +73,6 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { defer walker.ExitPath(pn.Path()) } - if g.checkAndApplyOverrides(ctx.Overrides(), v) { - // We can skip whole vertices if they are in a module that has been - // overridden. - return - } - // If the node is exec-able, then execute it. 
if ev, ok := v.(GraphNodeExecutable); ok { diags = diags.Append(walker.Execute(vertexCtx, ev)) @@ -94,10 +85,10 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { if ev, ok := v.(GraphNodeDynamicExpandable); ok { log.Printf("[TRACE] vertex %q: expanding dynamic subgraph", dag.VertexName(v)) - g, moreDiags := ev.DynamicExpand(vertexCtx) - diags = diags.Append(moreDiags) + g, err := ev.DynamicExpand(vertexCtx) + diags = diags.Append(err) if diags.HasErrors() { - log.Printf("[TRACE] vertex %q: failed expanding dynamic subgraph: %s", dag.VertexName(v), diags.Err()) + log.Printf("[TRACE] vertex %q: failed expanding dynamic subgraph: %s", dag.VertexName(v), err) return } if g != nil { @@ -145,97 +136,3 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { return g.AcyclicGraph.Walk(walkFn) } - -// checkAndApplyOverrides checks if target has any data that needs to be overridden. -// -// If this function returns true, then the whole vertex should be skipped and -// not executed. -// -// The logic for a vertex is that if it is within an overridden module then we -// don't want to execute it. Instead, we want to just set the values on the -// output nodes for that module directly. So if a node is a -// GraphNodeModuleInstance we want to skip it if there is an entry in our -// overrides data structure that either matches the module for the vertex or -// is a parent of the module for the vertex. -// -// We also want to actually set the new values for any outputs, resources or -// data sources we encounter that should be overridden. -func (g *Graph) checkAndApplyOverrides(overrides *mocking.Overrides, target dag.Vertex) bool { - if overrides.Empty() { - return false - } - - switch v := target.(type) { - case GraphNodeOverridable: - // For resource and data sources, we want to skip them completely if - // they are within an overridden module. 
- resourceInstance := v.ResourceInstanceAddr() - if overrides.IsOverridden(resourceInstance.Module) { - return true - } - - if override, ok := overrides.GetOverrideInclProviders(resourceInstance, v.ConfigProvider()); ok { - v.SetOverride(override) - return false - } - - if override, ok := overrides.GetOverrideInclProviders(resourceInstance.ContainingResource(), v.ConfigProvider()); ok { - v.SetOverride(override) - return false - } - - case *NodeApplyableOutput: - // For outputs, we want to skip them completely if they are deeply - // nested within an overridden module. - module := v.Path() - if overrides.IsDeeplyOverridden(module) { - // If the output is deeply nested under an overridden module we want - // to skip - return true - } - - setOverride := func(values cty.Value) { - key := v.Addr.OutputValue.Name - if values.Type().HasAttribute(key) { - v.override = values.GetAttr(key) - } else { - // If we don't have a value provided for an output, then we'll - // just set it to be null. - // - // TODO(liamcervante): Can we generate a value here? Probably - // not as we don't know the type. - v.override = cty.NullVal(cty.DynamicPseudoType) - } - } - - // Otherwise, if we are in a directly overridden module then we want to - // apply the overridden output values. - if override, ok := overrides.GetOverride(module); ok { - setOverride(override.Values) - return false - } - - lastStepInstanced := len(module) > 0 && module[len(module)-1].InstanceKey != addrs.NoKey - if lastStepInstanced { - // Then we could have overridden all the instances of this module. - if override, ok := overrides.GetOverride(module.ContainingModule()); ok { - setOverride(override.Values) - return false - } - } - - case GraphNodeModuleInstance: - // Then this node is simply in a module. It might be that this entire - // module has been overridden, in which case this node shouldn't - // execute. - // - // We checked for resources and outputs earlier, so we know this isn't - // anything special. 
- module := v.Path() - if overrides.IsOverridden(module) { - return true - } - } - - return false -} diff --git a/internal/terraform/graph_builder_plan.go b/internal/terraform/graph_builder_plan.go index 927be261176f..1562874de2cd 100644 --- a/internal/terraform/graph_builder_plan.go +++ b/internal/terraform/graph_builder_plan.go @@ -340,6 +340,13 @@ func (b *PlanGraphBuilder) initImport() { // as the new state, and users are not expecting the import process // to update any other instances in state. skipRefresh: true, + + // If we get here, we know that we are in legacy import mode, and + // that the user has run the import command rather than plan. + // This flag must be propagated down to the + // NodePlannableResourceInstance so we can ignore the new import + // behaviour. + legacyImportMode: true, } } } diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index 1d1b205b8962..d56f88f057ee 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -15,7 +15,6 @@ import ( "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/instances" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" @@ -44,7 +43,6 @@ type ContextGraphWalker struct { RootVariableValues InputValues Config *configs.Config PlanTimestamp time.Time - Overrides *mocking.Overrides // This is an output. Do not set this, nor read it while a graph walk // is in progress. 
@@ -116,7 +114,6 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { Evaluator: evaluator, VariableValues: w.variableValues, VariableValuesLock: &w.variableValuesLock, - OverrideValues: w.Overrides, } return ctx diff --git a/internal/terraform/node_check.go b/internal/terraform/node_check.go index 6e0c4ed6408d..fe321d498167 100644 --- a/internal/terraform/node_check.go +++ b/internal/terraform/node_check.go @@ -82,7 +82,7 @@ func (n *nodeExpandCheck) ModulePath() addrs.Module { return n.addr.Module } -func (n *nodeExpandCheck) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.Diagnostics) { +func (n *nodeExpandCheck) DynamicExpand(ctx EvalContext) (*Graph, error) { exp := ctx.InstanceExpander() modInsts := exp.ExpandModule(n.ModulePath()) diff --git a/internal/terraform/node_local.go b/internal/terraform/node_local.go index 7d955e514f53..37625b7b48fb 100644 --- a/internal/terraform/node_local.go +++ b/internal/terraform/node_local.go @@ -66,7 +66,7 @@ func (n *nodeExpandLocal) References() []*addrs.Reference { return refs } -func (n *nodeExpandLocal) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.Diagnostics) { +func (n *nodeExpandLocal) DynamicExpand(ctx EvalContext) (*Graph, error) { var g Graph expander := ctx.InstanceExpander() for _, module := range expander.ExpandModule(n.Module) { diff --git a/internal/terraform/node_module_expand.go b/internal/terraform/node_module_expand.go index 3ed94fff4f16..716fedddafee 100644 --- a/internal/terraform/node_module_expand.go +++ b/internal/terraform/node_module_expand.go @@ -241,7 +241,7 @@ func (n *nodeValidateModule) Execute(ctx EvalContext, op walkOperation) (diags t diags = diags.Append(countDiags) case n.ModuleCall.ForEach != nil: - forEachDiags := newForEachEvaluator(n.ModuleCall.ForEach, ctx).ValidateResourceValue() + _, forEachDiags := evaluateForEachExpressionValue(n.ModuleCall.ForEach, ctx, true) diags = diags.Append(forEachDiags) } diff --git a/internal/terraform/node_module_variable.go 
b/internal/terraform/node_module_variable.go index 7237fb84b322..4d4819563cf9 100644 --- a/internal/terraform/node_module_variable.go +++ b/internal/terraform/node_module_variable.go @@ -50,7 +50,7 @@ func (n *nodeExpandModuleVariable) temporaryValue() bool { return true } -func (n *nodeExpandModuleVariable) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.Diagnostics) { +func (n *nodeExpandModuleVariable) DynamicExpand(ctx EvalContext) (*Graph, error) { var g Graph // If this variable has preconditions, we need to report these checks now. diff --git a/internal/terraform/node_output.go b/internal/terraform/node_output.go index 249365a65f3c..c1ea0776121f 100644 --- a/internal/terraform/node_output.go +++ b/internal/terraform/node_output.go @@ -54,7 +54,7 @@ func (n *nodeExpandOutput) temporaryValue() bool { return !n.Module.IsRoot() } -func (n *nodeExpandOutput) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.Diagnostics) { +func (n *nodeExpandOutput) DynamicExpand(ctx EvalContext) (*Graph, error) { expander := ctx.InstanceExpander() changes := ctx.Changes() @@ -205,9 +205,6 @@ type NodeApplyableOutput struct { DestroyApply bool Planning bool - - // override is set by the graph itself, just before this node executes. - override cty.Value } var ( @@ -325,9 +322,7 @@ func (n *NodeApplyableOutput) Execute(ctx EvalContext, op walkOperation) (diags // Checks are not evaluated during a destroy. The checks may fail, may not // be valid, or may not have been registered at all. - // We also don't evaluate checks for overridden outputs. This is because - // any references within the checks will likely not have been created. 
- if !n.DestroyApply && n.override == cty.NilVal { + if !n.DestroyApply { checkRuleSeverity := tfdiags.Error if n.RefreshOnly { checkRuleSeverity = tfdiags.Warning @@ -347,38 +342,32 @@ func (n *NodeApplyableOutput) Execute(ctx EvalContext, op walkOperation) (diags // If there was no change recorded, or the recorded change was not wholly // known, then we need to re-evaluate the output if !changeRecorded || !val.IsWhollyKnown() { - - // First, we check if we have an overridden value. If we do, then we - // use that and we don't try and evaluate the underlying expression. - val = n.override - if val == cty.NilVal { - // This has to run before we have a state lock, since evaluation also - // reads the state - var evalDiags tfdiags.Diagnostics - val, evalDiags = ctx.EvaluateExpr(n.Config.Expr, cty.DynamicPseudoType, nil) - diags = diags.Append(evalDiags) - - // We'll handle errors below, after we have loaded the module. - // Outputs don't have a separate mode for validation, so validate - // depends_on expressions here too - diags = diags.Append(validateDependsOn(ctx, n.Config.DependsOn)) - - // For root module outputs in particular, an output value must be - // statically declared as sensitive in order to dynamically return - // a sensitive result, to help avoid accidental exposure in the state - // of a sensitive value that the user doesn't want to include there. - if n.Addr.Module.IsRoot() { - if !n.Config.Sensitive && marks.Contains(val, marks.Sensitive) { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Output refers to sensitive values", - Detail: `To reduce the risk of accidentally exporting sensitive data that was intended to be only internal, Terraform requires that any root module output containing sensitive data be explicitly marked as sensitive, to confirm your intent. 
+ // This has to run before we have a state lock, since evaluation also + // reads the state + var evalDiags tfdiags.Diagnostics + val, evalDiags = ctx.EvaluateExpr(n.Config.Expr, cty.DynamicPseudoType, nil) + diags = diags.Append(evalDiags) + + // We'll handle errors below, after we have loaded the module. + // Outputs don't have a separate mode for validation, so validate + // depends_on expressions here too + diags = diags.Append(validateDependsOn(ctx, n.Config.DependsOn)) + + // For root module outputs in particular, an output value must be + // statically declared as sensitive in order to dynamically return + // a sensitive result, to help avoid accidental exposure in the state + // of a sensitive value that the user doesn't want to include there. + if n.Addr.Module.IsRoot() { + if !n.Config.Sensitive && marks.Contains(val, marks.Sensitive) { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Output refers to sensitive values", + Detail: `To reduce the risk of accidentally exporting sensitive data that was intended to be only internal, Terraform requires that any root module output containing sensitive data be explicitly marked as sensitive, to confirm your intent. If you do intend to export this data, annotate the output value as sensitive by adding the following argument: sensitive = true`, - Subject: n.Config.DeclRange.Ptr(), - }) - } + Subject: n.Config.DeclRange.Ptr(), + }) } } } diff --git a/internal/terraform/node_overridable.go b/internal/terraform/node_overridable.go deleted file mode 100644 index de8da889fb61..000000000000 --- a/internal/terraform/node_overridable.go +++ /dev/null @@ -1,18 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: BUSL-1.1 - -package terraform - -import ( - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" -) - -// GraphNodeOverridable represents a node in the graph that can be overridden -// by the testing framework. 
-type GraphNodeOverridable interface { - GraphNodeResourceInstance - - ConfigProvider() addrs.AbsProviderConfig - SetOverride(override *configs.Override) -} diff --git a/internal/terraform/node_provider.go b/internal/terraform/node_provider.go index ce2715a09a74..36e8e3a5c051 100644 --- a/internal/terraform/node_provider.go +++ b/internal/terraform/node_provider.go @@ -8,11 +8,10 @@ import ( "log" "github.com/hashicorp/hcl/v2" - "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/tfdiags" + "github.com/zclconf/go-cty/cty" ) // NodeApplyableProvider represents a provider during an apply. @@ -26,7 +25,7 @@ var ( // GraphNodeExecutable func (n *NodeApplyableProvider) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) { - _, err := ctx.InitProvider(n.Addr, n.Config) + _, err := ctx.InitProvider(n.Addr) diags = diags.Append(err) if diags.HasErrors() { return diags diff --git a/internal/terraform/node_provider_eval.go b/internal/terraform/node_provider_eval.go index 87dd299ede8d..98c8946a3176 100644 --- a/internal/terraform/node_provider_eval.go +++ b/internal/terraform/node_provider_eval.go @@ -17,6 +17,6 @@ var _ GraphNodeExecutable = (*NodeEvalableProvider)(nil) // GraphNodeExecutable func (n *NodeEvalableProvider) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) { - _, err := ctx.InitProvider(n.Addr, n.Config) + _, err := ctx.InitProvider(n.Addr) return diags.Append(err) } diff --git a/internal/terraform/node_resource_abstract.go b/internal/terraform/node_resource_abstract.go index b8ffec310d27..4bf7e989595e 100644 --- a/internal/terraform/node_resource_abstract.go +++ b/internal/terraform/node_resource_abstract.go @@ -122,7 +122,6 @@ var ( _ GraphNodeAttachProvisionerSchema = (*NodeAbstractResourceInstance)(nil) _ GraphNodeAttachProviderMetaConfigs = (*NodeAbstractResourceInstance)(nil) _ 
GraphNodeTargetable = (*NodeAbstractResourceInstance)(nil) - _ GraphNodeOverridable = (*NodeAbstractResourceInstance)(nil) _ dag.GraphNodeDotter = (*NodeAbstractResourceInstance)(nil) ) @@ -213,14 +212,7 @@ func (n *NodeAbstractResource) References() []*addrs.Reference { func (n *NodeAbstractResource) ImportReferences() []*addrs.Reference { var result []*addrs.Reference for _, importTarget := range n.importTargets { - // legacy import won't have any config - if importTarget.Config == nil { - continue - } - - refs, _ := lang.ReferencesInExpr(addrs.ParseRef, importTarget.Config.ID) - result = append(result, refs...) - refs, _ = lang.ReferencesInExpr(addrs.ParseRef, importTarget.Config.ForEach) + refs, _ := lang.ReferencesInExpr(addrs.ParseRef, importTarget.ID) result = append(result, refs...) } return result diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index ad4cee0035aa..f542bc44e5ab 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -16,7 +16,6 @@ import ( "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/instances" - "github.com/hashicorp/terraform/internal/moduletest/mocking" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/objchange" "github.com/hashicorp/terraform/internal/providers" @@ -44,9 +43,6 @@ type NodeAbstractResourceInstance struct { // During import we may generate configuration for a resource, which needs // to be stored in the final change. generatedConfigHCL string - - // override is set by the graph itself, just before this node executes. 
- override *configs.Override } // NewNodeAbstractResourceInstance creates an abstract resource instance graph @@ -139,16 +135,6 @@ func (n *NodeAbstractResourceInstance) AttachResourceState(s *states.Resource) { n.storedProviderConfig = s.ProviderConfig } -// GraphNodeOverridable -func (n *NodeAbstractResourceInstance) ConfigProvider() addrs.AbsProviderConfig { - return n.ResolvedProvider -} - -// GraphNodeOverridable -func (n *NodeAbstractResourceInstance) SetOverride(override *configs.Override) { - n.override = override -} - // readDiff returns the planned change for a particular resource instance // object. func (n *NodeAbstractResourceInstance) readDiff(ctx EvalContext, providerSchema providers.ProviderSchema) (*plans.ResourceInstanceChange, error) { @@ -413,50 +399,39 @@ func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState return plan, diags } - var resp providers.PlanResourceChangeResponse - if n.override != nil { - // If we have an overridden value from the test framework, that means - // this value was created without consulting the provider previously. - // We can just set the planned state to deleted without consulting the - // provider. - resp = providers.PlanResourceChangeResponse{ - PlannedState: nullVal, - } - } else { - // Allow the provider to check the destroy plan, and insert any - // necessary private data. - resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{ - TypeName: n.Addr.Resource.Resource.Type, - Config: nullVal, - PriorState: unmarkedPriorVal, - ProposedNewState: nullVal, - PriorPrivate: currentState.Private, - ProviderMeta: metaConfigVal, - }) + // Allow the provider to check the destroy plan, and insert any necessary + // private data. 
+ resp := provider.PlanResourceChange(providers.PlanResourceChangeRequest{ + TypeName: n.Addr.Resource.Resource.Type, + Config: nullVal, + PriorState: unmarkedPriorVal, + ProposedNewState: nullVal, + PriorPrivate: currentState.Private, + ProviderMeta: metaConfigVal, + }) - // We may not have a config for all destroys, but we want to reference - // it in the diagnostics if we do. - if n.Config != nil { - resp.Diagnostics = resp.Diagnostics.InConfigBody(n.Config.Config, n.Addr.String()) - } - diags = diags.Append(resp.Diagnostics) - if diags.HasErrors() { - return plan, diags - } + // We may not have a config for all destroys, but we want to reference it in + // the diagnostics if we do. + if n.Config != nil { + resp.Diagnostics = resp.Diagnostics.InConfigBody(n.Config.Config, n.Addr.String()) + } + diags = diags.Append(resp.Diagnostics) + if diags.HasErrors() { + return plan, diags + } - // Check that the provider returned a null value here, since that is the - // only valid value for a destroy plan. - if !resp.PlannedState.IsNull() { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Provider produced invalid plan", - fmt.Sprintf( - "Provider %q planned a non-null destroy value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ResolvedProvider.Provider, n.Addr), - ), - ) - return plan, diags - } + // Check that the provider returned a null value here, since that is the + // only valid value for a destroy plan. + if !resp.PlannedState.IsNull() { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Provider produced invalid plan", + fmt.Sprintf( + "Provider %q planned a non-null destroy value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", + n.ResolvedProvider.Provider, n.Addr), + ), + ) + return plan, diags } // Plan is always the same for a destroy. 
@@ -588,21 +563,14 @@ func (n *NodeAbstractResourceInstance) refresh(ctx EvalContext, deposedKey state priorVal, priorPaths = priorVal.UnmarkDeepWithPaths() } - var resp providers.ReadResourceResponse - if n.override != nil { - // If we have an override set for this resource, we don't want to talk - // to the provider so we'll just return whatever was in state. - resp = providers.ReadResourceResponse{ - NewState: priorVal, - } - } else { - resp = provider.ReadResource(providers.ReadResourceRequest{ - TypeName: n.Addr.Resource.Resource.Type, - PriorState: priorVal, - Private: state.Private, - ProviderMeta: metaConfigVal, - }) + providerReq := providers.ReadResourceRequest{ + TypeName: n.Addr.Resource.Resource.Type, + PriorState: priorVal, + Private: state.Private, + ProviderMeta: metaConfigVal, } + + resp := provider.ReadResource(providerReq) if n.Config != nil { resp.Diagnostics = resp.Diagnostics.InConfigBody(n.Config.Config, n.Addr.String()) } @@ -830,36 +798,14 @@ func (n *NodeAbstractResourceInstance) plan( return nil, nil, keyData, diags } - var resp providers.PlanResourceChangeResponse - if n.override != nil { - // Then we have an override to apply for this change. But, overrides - // only matter when we are creating a resource for the first time as we - // only apply computed values. - if priorVal.IsNull() { - // Then we are actually creating something, so let's populate the - // computed values from our override value. - override, overrideDiags := mocking.PlanComputedValuesForResource(proposedNewVal, schema) - resp = providers.PlanResourceChangeResponse{ - PlannedState: override, - Diagnostics: overrideDiags, - } - } else { - // This is an update operation, and we don't actually have any - // computed values that need to be applied. 
- resp = providers.PlanResourceChangeResponse{ - PlannedState: proposedNewVal, - } - } - } else { - resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{ - TypeName: n.Addr.Resource.Resource.Type, - Config: unmarkedConfigVal, - PriorState: unmarkedPriorVal, - ProposedNewState: proposedNewVal, - PriorPrivate: priorPrivate, - ProviderMeta: metaConfigVal, - }) - } + resp := provider.PlanResourceChange(providers.PlanResourceChangeRequest{ + TypeName: n.Addr.Resource.Resource.Type, + Config: unmarkedConfigVal, + PriorState: unmarkedPriorVal, + ProposedNewState: proposedNewVal, + PriorPrivate: priorPrivate, + ProviderMeta: metaConfigVal, + }) diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config, n.Addr.String())) if diags.HasErrors() { return nil, nil, keyData, diags @@ -1086,24 +1032,14 @@ func (n *NodeAbstractResourceInstance) plan( // create a new proposed value from the null state and the config proposedNewVal = objchange.ProposedNew(schema, nullPriorVal, unmarkedConfigVal) - if n.override != nil { - // In this case, we are always creating the resource so we don't - // do any validation, and just call out to the mocking library. 
- override, overrideDiags := mocking.PlanComputedValuesForResource(proposedNewVal, schema) - resp = providers.PlanResourceChangeResponse{ - PlannedState: override, - Diagnostics: overrideDiags, - } - } else { - resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{ - TypeName: n.Addr.Resource.Resource.Type, - Config: unmarkedConfigVal, - PriorState: nullPriorVal, - ProposedNewState: proposedNewVal, - PriorPrivate: plannedPrivate, - ProviderMeta: metaConfigVal, - }) - } + resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{ + TypeName: n.Addr.Resource.Resource.Type, + Config: unmarkedConfigVal, + PriorState: nullPriorVal, + ProposedNewState: proposedNewVal, + PriorPrivate: plannedPrivate, + ProviderMeta: metaConfigVal, + }) // We need to tread carefully here, since if there are any warnings // in here they probably also came out of our previous call to // PlanResourceChange above, and so we don't want to repeat them. @@ -1511,23 +1447,11 @@ func (n *NodeAbstractResourceInstance) readDataSource(ctx EvalContext, configVal return newVal, diags } - var resp providers.ReadDataSourceResponse - if n.override != nil { - override, overrideDiags := mocking.ComputedValuesForDataSource(configVal, mocking.ReplacementValue{ - Value: n.override.Values, - Range: n.override.ValuesRange, - }, schema) - resp = providers.ReadDataSourceResponse{ - State: override, - Diagnostics: overrideDiags, - } - } else { - resp = provider.ReadDataSource(providers.ReadDataSourceRequest{ - TypeName: n.Addr.ContainingResource().Resource.Type, - Config: configVal, - ProviderMeta: metaConfigVal, - }) - } + resp := provider.ReadDataSource(providers.ReadDataSourceRequest{ + TypeName: n.Addr.ContainingResource().Resource.Type, + Config: configVal, + ProviderMeta: metaConfigVal, + }) diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config, n.Addr.String())) if diags.HasErrors() { return newVal, diags @@ -2369,35 +2293,14 @@ func (n *NodeAbstractResourceInstance) 
apply( return newState, diags } - var resp providers.ApplyResourceChangeResponse - if n.override != nil { - // As with the planning stage, we only need to worry about computed - // values the first time the object is created. Otherwise, we're happy - // to just apply whatever the user asked for. - if change.Action == plans.Create { - override, overrideDiags := mocking.ApplyComputedValuesForResource(unmarkedAfter, mocking.ReplacementValue{ - Value: n.override.Values, - Range: n.override.ValuesRange, - }, schema) - resp = providers.ApplyResourceChangeResponse{ - NewState: override, - Diagnostics: overrideDiags, - } - } else { - resp = providers.ApplyResourceChangeResponse{ - NewState: unmarkedAfter, - } - } - } else { - resp = provider.ApplyResourceChange(providers.ApplyResourceChangeRequest{ - TypeName: n.Addr.Resource.Resource.Type, - PriorState: unmarkedBefore, - Config: unmarkedConfigVal, - PlannedState: unmarkedAfter, - PlannedPrivate: change.Private, - ProviderMeta: metaConfigVal, - }) - } + resp := provider.ApplyResourceChange(providers.ApplyResourceChangeRequest{ + TypeName: n.Addr.Resource.Resource.Type, + PriorState: unmarkedBefore, + Config: unmarkedConfigVal, + PlannedState: unmarkedAfter, + PlannedPrivate: change.Private, + ProviderMeta: metaConfigVal, + }) applyDiags := resp.Diagnostics if applyConfig != nil { applyDiags = applyDiags.InConfigBody(applyConfig.Config, n.Addr.String()) diff --git a/internal/terraform/node_resource_import.go b/internal/terraform/node_resource_import.go index 608cbabee2f5..a081d4df7e49 100644 --- a/internal/terraform/node_resource_import.go +++ b/internal/terraform/node_resource_import.go @@ -118,7 +118,7 @@ func (n *graphNodeImportState) Execute(ctx EvalContext, op walkOperation) (diags // and state inserts we need to do for our import state. Since they're new // resources they don't depend on anything else and refreshes are isolated // so this is nearly a perfect use case for dynamic expand. 
-func (n *graphNodeImportState) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.Diagnostics) { +func (n *graphNodeImportState) DynamicExpand(ctx EvalContext) (*Graph, error) { var diags tfdiags.Diagnostics g := &Graph{Path: ctx.Path()} @@ -164,7 +164,7 @@ func (n *graphNodeImportState) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.D } if diags.HasErrors() { // Bail out early, then. - return nil, diags + return nil, diags.Err() } // For each of the states, we add a node to handle the refresh/add to state. @@ -182,7 +182,7 @@ func (n *graphNodeImportState) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.D addRootNodeToGraph(g) // Done! - return g, diags + return g, diags.Err() } // graphNodeImportStateSub is the sub-node of graphNodeImportState diff --git a/internal/terraform/node_resource_plan.go b/internal/terraform/node_resource_plan.go index d312719505bb..81ba8f46c6e4 100644 --- a/internal/terraform/node_resource_plan.go +++ b/internal/terraform/node_resource_plan.go @@ -5,10 +5,8 @@ package terraform import ( "fmt" - "log" "strings" - "github.com/hashicorp/hcl/v2" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/dag" "github.com/hashicorp/terraform/internal/states" @@ -48,13 +46,9 @@ type nodeExpandPlannableResource struct { // union of multiple groups of dependencies. dependencies []addrs.ConfigResource - // these are a record of all the addresses used in expansion so they can be - // validated as a complete set. While the type is guaranteed to be - // addrs.AbsResourceInstance for all these, we use addrs.Checkable because - // the expandedInstnaces need to be passed to the check state to register - // the instances for checks. - expandedImports addrs.Set[addrs.Checkable] - expandedInstances addrs.Set[addrs.Checkable] + // legacyImportMode is set if the graph is being constructed following an + // invocation of the legacy "terraform import" CLI command. 
+ legacyImportMode bool } var ( @@ -102,7 +96,7 @@ func (n *nodeExpandPlannableResource) ModifyCreateBeforeDestroy(v bool) error { return nil } -func (n *nodeExpandPlannableResource) DynamicExpand(ctx EvalContext) (*Graph, tfdiags.Diagnostics) { +func (n *nodeExpandPlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) { var g Graph expander := ctx.InstanceExpander() @@ -167,50 +161,28 @@ func (n *nodeExpandPlannableResource) DynamicExpand(ctx EvalContext) (*Graph, tf // so that we can inform the checks subsystem of which instances it should // be expecting check results for, below. var diags tfdiags.Diagnostics - n.expandedImports = addrs.MakeSet[addrs.Checkable]() - n.expandedInstances = addrs.MakeSet[addrs.Checkable]() + instAddrs := addrs.MakeSet[addrs.Checkable]() for _, module := range moduleInstances { resAddr := n.Addr.Resource.Absolute(module) - err := n.expandResourceInstances(ctx, resAddr, &g) + err := n.expandResourceInstances(ctx, resAddr, &g, instAddrs) diags = diags.Append(err) } if diags.HasErrors() { - return nil, diags + return nil, diags.ErrWithWarnings() } - diags = diags.Append(n.validateExpandedImportTargets()) - // If this is a resource that participates in custom condition checks // (i.e. it has preconditions or postconditions) then the check state // wants to know the addresses of the checkable objects so that it can // treat them as unknown status if we encounter an error before actually // visiting the checks. if checkState := ctx.Checks(); checkState.ConfigHasChecks(n.NodeAbstractResource.Addr) { - checkState.ReportCheckableObjects(n.NodeAbstractResource.Addr, n.expandedInstances) + checkState.ReportCheckableObjects(n.NodeAbstractResource.Addr, instAddrs) } addRootNodeToGraph(&g) - return &g, diags -} - -// validateExpandedImportTargets checks that all expanded imports correspond to -// a configured instance. 
-func (n *nodeExpandPlannableResource) validateExpandedImportTargets() tfdiags.Diagnostics { - var diags tfdiags.Diagnostics - - for _, addr := range n.expandedImports { - if !n.expandedInstances.Has(addr) { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Configuration for import target does not exist", - fmt.Sprintf("The configuration for the given import %s does not exist. All target instances must have an associated configuration to be imported.", addr), - )) - return diags - } - } - - return diags + return &g, diags.ErrWithWarnings() } // expandResourceInstances calculates the dynamic expansion for the resource @@ -219,13 +191,13 @@ func (n *nodeExpandPlannableResource) validateExpandedImportTargets() tfdiags.Di // It has several side-effects: // - Adds a node to Graph g for each leaf resource instance it discovers, whether present or orphaned. // - Registers the expansion of the resource in the "expander" object embedded inside EvalContext ctx. -// - Adds each present (non-orphaned) resource instance address to checkableAddrs (guaranteed to always be addrs.AbsResourceInstance, despite being declared as addrs.Checkable). +// - Adds each present (non-orphaned) resource instance address to instAddrs (guaranteed to always be addrs.AbsResourceInstance, despite being declared as addrs.Checkable). // // After calling this for each of the module instances the resource appears // within, the caller must register the final superset instAddrs with the // checks subsystem so that it knows the fully expanded set of checkable // object instances for this resource instance. 
-func (n *nodeExpandPlannableResource) expandResourceInstances(globalCtx EvalContext, resAddr addrs.AbsResource, g *Graph) tfdiags.Diagnostics { +func (n *nodeExpandPlannableResource) expandResourceInstances(globalCtx EvalContext, resAddr addrs.AbsResource, g *Graph, instAddrs addrs.Set[addrs.Checkable]) error { var diags tfdiags.Diagnostics // The rest of our work here needs to know which module instance it's @@ -238,7 +210,7 @@ func (n *nodeExpandPlannableResource) expandResourceInstances(globalCtx EvalCont moreDiags := n.writeResourceState(moduleCtx, resAddr) diags = diags.Append(moreDiags) if moreDiags.HasErrors() { - return diags + return diags.ErrWithWarnings() } // Before we expand our resource into potentially many resource instances, @@ -309,7 +281,10 @@ func (n *nodeExpandPlannableResource) expandResourceInstances(globalCtx EvalCont // If this resource is participating in the "checks" mechanism then our // caller will need to know all of our expanded instance addresses as // checkable object instances. - n.expandedInstances.Add(addr) + // (NOTE: instAddrs probably already has other instance addresses in it + // from earlier calls to this function with different resource addresses, + // because its purpose is to aggregate them all together into a single set.) + instAddrs.Add(addr) } // Our graph builder mechanism expects to always be constructing new @@ -317,112 +292,19 @@ func (n *nodeExpandPlannableResource) expandResourceInstances(globalCtx EvalCont // construct a subgraph just for this individual modules's instances and // then we'll steal all of its nodes and edges to incorporate into our // main graph which contains all of the resource instances together. 
- instG, instDiags := n.resourceInstanceSubgraph(moduleCtx, resAddr, instanceAddrs) - if instDiags.HasErrors() { - diags = diags.Append(instDiags) - return diags + instG, err := n.resourceInstanceSubgraph(moduleCtx, resAddr, instanceAddrs) + if err != nil { + diags = diags.Append(err) + return diags.ErrWithWarnings() } g.Subsume(&instG.AcyclicGraph.Graph) - return diags -} - -// Import blocks are expanded in conjunction with their associated resource block. -func (n nodeExpandPlannableResource) expandResourceImports(ctx EvalContext, addr addrs.AbsResource, instanceAddrs []addrs.AbsResourceInstance) (addrs.Map[addrs.AbsResourceInstance, string], tfdiags.Diagnostics) { - // Imports maps the target address to an import ID. - imports := addrs.MakeMap[addrs.AbsResourceInstance, string]() - var diags tfdiags.Diagnostics - - if len(n.importTargets) == 0 { - return imports, diags - } - - // Import blocks are only valid within the root module, and must be - // evaluated within that context - ctx = ctx.WithPath(addrs.RootModuleInstance) - - for _, imp := range n.importTargets { - if imp.Config == nil { - // if we have a legacy addr, it was supplied on the commandline so - // there is nothing to expand - if !imp.LegacyAddr.Equal(addrs.AbsResourceInstance{}) { - imports.Put(imp.LegacyAddr, imp.IDString) - n.expandedImports.Add(imp.LegacyAddr) - return imports, diags - } - - // legacy import tests may have no configuration - log.Printf("[WARN] no configuration for import target %#v", imp) - continue - } - - if imp.Config.ForEach == nil { - importID, evalDiags := evaluateImportIdExpression(imp.Config.ID, ctx, EvalDataForNoInstanceKey) - diags = diags.Append(evalDiags) - if diags.HasErrors() { - return imports, diags - } - - traversal, hds := hcl.AbsTraversalForExpr(imp.Config.To) - diags = diags.Append(hds) - to, tds := addrs.ParseAbsResourceInstance(traversal) - diags = diags.Append(tds) - if diags.HasErrors() { - return imports, diags - } - - imports.Put(to, importID) - 
n.expandedImports.Add(to) - - log.Printf("[TRACE] expandResourceImports: found single import target %s", to) - continue - } - - forEachData, forEachDiags := newForEachEvaluator(imp.Config.ForEach, ctx).ImportValues() - diags = diags.Append(forEachDiags) - if forEachDiags.HasErrors() { - return imports, diags - } - - for _, keyData := range forEachData { - res, evalDiags := evalImportToExpression(imp.Config.To, keyData) - diags = diags.Append(evalDiags) - if diags.HasErrors() { - return imports, diags - } - - importID, evalDiags := evaluateImportIdExpression(imp.Config.ID, ctx, keyData) - diags = diags.Append(evalDiags) - if diags.HasErrors() { - return imports, diags - } - - imports.Put(res, importID) - n.expandedImports.Add(res) - log.Printf("[TRACE] expandResourceImports: expanded import target %s", res) - } - } - - // filter out any import which already exist in state - state := ctx.State() - for _, el := range imports.Elements() { - if state.ResourceInstance(el.Key) != nil { - log.Printf("[DEBUG] expandResourceImports: skipping import address %s already in state", el.Key) - imports.Remove(el.Key) - } - } - - return imports, diags + return diags.ErrWithWarnings() } -func (n *nodeExpandPlannableResource) resourceInstanceSubgraph(ctx EvalContext, addr addrs.AbsResource, instanceAddrs []addrs.AbsResourceInstance) (*Graph, tfdiags.Diagnostics) { +func (n *nodeExpandPlannableResource) resourceInstanceSubgraph(ctx EvalContext, addr addrs.AbsResource, instanceAddrs []addrs.AbsResourceInstance) (*Graph, error) { var diags tfdiags.Diagnostics - // Now that the resources are all expanded, we can expand the imports for - // this resource. - imports, importDiags := n.expandResourceImports(ctx, addr, instanceAddrs) - diags = diags.Append(importDiags) - // Our graph transformers require access to the full state, so we'll // temporarily lock it while we work on this. 
state := ctx.State().Lock() @@ -434,12 +316,25 @@ func (n *nodeExpandPlannableResource) resourceInstanceSubgraph(ctx EvalContext, // If we're in legacy import mode (the import CLI command), we only need // to return the import node, not a plannable resource node. - for _, importTarget := range n.importTargets { - if importTarget.LegacyAddr.Equal(a.Addr) { - return &graphNodeImportState{ - Addr: importTarget.LegacyAddr, - ID: imports.Get(importTarget.LegacyAddr), - ResolvedProvider: n.ResolvedProvider, + if n.legacyImportMode { + for _, importTarget := range n.importTargets { + if importTarget.Addr.Equal(a.Addr) { + + // The import ID was supplied as a string on the command + // line and made into a synthetic HCL expression. + importId, diags := evaluateImportIdExpression(importTarget.ID, ctx) + if diags.HasErrors() { + // This should be impossible, because the import command + // arg parsing builds the synth expression from a + // non-null string. + panic(fmt.Sprintf("Invalid import id: %s. This is a bug in Terraform; please report it!", diags.Err())) + } + + return &graphNodeImportState{ + Addr: importTarget.Addr, + ID: importId, + ResolvedProvider: n.ResolvedProvider, + } } } } @@ -467,10 +362,15 @@ func (n *nodeExpandPlannableResource) resourceInstanceSubgraph(ctx EvalContext, forceReplace: n.forceReplace, } - importID, ok := imports.GetOk(a.Addr) - if ok { - m.importTarget = ImportTarget{ - IDString: importID, + for _, importTarget := range n.importTargets { + if importTarget.Addr.Equal(a.Addr) { + // If we get here, we're definitely not in legacy import mode, + // so go ahead and plan the resource changes including import. 
+ m.importTarget = ImportTarget{ + ID: importTarget.ID, + Addr: importTarget.Addr, + Config: importTarget.Config, + } } } @@ -529,8 +429,6 @@ func (n *nodeExpandPlannableResource) resourceInstanceSubgraph(ctx EvalContext, Steps: steps, Name: "nodeExpandPlannableResource", } - graph, graphDiags := b.Build(addr.Module) - diags = diags.Append(graphDiags) - - return graph, diags + graph, diags := b.Build(addr.Module) + return graph, diags.ErrWithWarnings() } diff --git a/internal/terraform/node_resource_plan_instance.go b/internal/terraform/node_resource_plan_instance.go index a4bbaa53594e..ecdd5a5d64eb 100644 --- a/internal/terraform/node_resource_plan_instance.go +++ b/internal/terraform/node_resource_plan_instance.go @@ -159,8 +159,41 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext) } } - importing := n.importTarget.IDString != "" - importId := n.importTarget.IDString + importing := n.importTarget.ID != nil + var importId string + + if importing { + var evalDiags tfdiags.Diagnostics + + importId, evalDiags = evaluateImportIdExpression(n.importTarget.ID, ctx) + if evalDiags.HasErrors() { + diags = diags.Append(evalDiags) + return diags + } + } + + if importing && n.Config == nil && len(n.generateConfigPath) == 0 { + // Then the user wrote an import target to a target that didn't exist. + if n.Addr.Module.IsRoot() { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Import block target does not exist", + Detail: "The target for the given import block does not exist. If you wish to automatically generate config for this resource, use the -generate-config-out option within terraform plan. Otherwise, make sure the target resource exists within your configuration. 
For example:\n\n terraform plan -generate-config-out=generated.tf", + Subject: n.importTarget.Config.DeclRange.Ptr(), + }) + } else { + // You can't generate config for a resource that is inside a + // module, so we will present a different error message for + // this case. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Import block target does not exist", + Detail: "The target for the given import block does not exist. The specified target is within a module, and must be defined as a resource within that module before anything can be imported.", + Subject: n.importTarget.Config.DeclRange.Ptr(), + }) + } + return diags + } // If the resource is to be imported, we now ask the provider for an Import // and a Refresh, and save the resulting state to instanceRefreshState. @@ -472,7 +505,7 @@ func (n *NodePlannableResourceInstance) importState(ctx EvalContext, addr addrs. })) if imported[0].TypeName == "" { - diags = diags.Append(fmt.Errorf("import of %s didn't set type", n.Addr.String())) + diags = diags.Append(fmt.Errorf("import of %s didn't set type", n.importTarget.Addr.String())) return nil, diags } @@ -484,14 +517,14 @@ func (n *NodePlannableResourceInstance) importState(ctx EvalContext, addr addrs. "Import returned null resource", fmt.Sprintf("While attempting to import with ID %s, the provider"+ "returned an instance with no state.", - n.importTarget.IDString, + n.importTarget.ID, ), )) } // refresh riNode := &NodeAbstractResourceInstance{ - Addr: n.Addr, + Addr: n.importTarget.Addr, NodeAbstractResource: NodeAbstractResource{ ResolvedProvider: n.ResolvedProvider, }, @@ -515,7 +548,7 @@ func (n *NodePlannableResourceInstance) importState(ctx EvalContext, addr addrs. 
"is correct and that it is associated with the provider's "+ "configured region or endpoint, or use \"terraform apply\" to "+ "create a new remote object for this resource.", - n.Addr, + n.importTarget.Addr, ), )) return instanceRefreshState, diags diff --git a/internal/terraform/node_resource_validate.go b/internal/terraform/node_resource_validate.go index 253bcb0097b6..7b35d2688e6e 100644 --- a/internal/terraform/node_resource_validate.go +++ b/internal/terraform/node_resource_validate.go @@ -306,7 +306,7 @@ func (n *NodeValidatableResource) validateResource(ctx EvalContext) tfdiags.Diag } // Evaluate the for_each expression here so we can expose the diagnostics - forEachDiags := newForEachEvaluator(n.Config.ForEach, ctx).ValidateResourceValue() + forEachDiags := validateForEach(ctx, n.Config.ForEach) diags = diags.Append(forEachDiags) } @@ -559,6 +559,21 @@ func validateCount(ctx EvalContext, expr hcl.Expression) (diags tfdiags.Diagnost return diags } +func validateForEach(ctx EvalContext, expr hcl.Expression) (diags tfdiags.Diagnostics) { + val, forEachDiags := evaluateForEachExpressionValue(expr, ctx, true) + // If the value isn't known then that's the best we can do for now, but + // we'll check more thoroughly during the plan walk + if !val.IsKnown() { + return diags + } + + if forEachDiags.HasErrors() { + diags = diags.Append(forEachDiags) + } + + return diags +} + func validateDependsOn(ctx EvalContext, dependsOn []hcl.Traversal) (diags tfdiags.Diagnostics) { for _, traversal := range dependsOn { ref, refDiags := addrs.ParseRef(traversal) diff --git a/internal/terraform/terraform_test.go b/internal/terraform/terraform_test.go index c4b578798493..0087679b39bb 100644 --- a/internal/terraform/terraform_test.go +++ b/internal/terraform/terraform_test.go @@ -241,14 +241,6 @@ func mustReference(s string) *addrs.Reference { return p } -func mustModuleInstance(s string) addrs.ModuleInstance { - p, diags := addrs.ParseModuleInstanceStr(s) - if diags.HasErrors() { - 
 		panic(diags.Err())
-	}
-	return p
-}
-
 // HookRecordApplyOrder is a test hook that records the order of applies
 // by recording the PreApply event.
 type HookRecordApplyOrder struct {
diff --git a/internal/terraform/test_context.go b/internal/terraform/test_context.go
index b9249892de36..ff877b5fdf83 100644
--- a/internal/terraform/test_context.go
+++ b/internal/terraform/test_context.go
@@ -100,11 +100,6 @@ func (ctx *TestContext) Evaluate(priorContexts map[string]*TestContext) {
 	var dataDiags tfdiags.Diagnostics
 	alternateStates := make(map[string]*evaluationStateData)
 	for name, priorContext := range priorContexts {
-		if priorContext == nil {
-			// Skip contexts that haven't been executed yet.
-			continue
-		}
-
 		var moreDiags tfdiags.Diagnostics
 		alternateStates[name], moreDiags = priorContext.evaluationStateData(nil)
 		dataDiags = dataDiags.Append(moreDiags)
diff --git a/internal/terraform/transform_config.go b/internal/terraform/transform_config.go
index ead1adb53de6..2afa0b9e3026 100644
--- a/internal/terraform/transform_config.go
+++ b/internal/terraform/transform_config.go
@@ -7,11 +7,9 @@ import (
 	"fmt"
 	"log"
 
-	"github.com/hashicorp/hcl/v2"
 	"github.com/hashicorp/terraform/internal/addrs"
 	"github.com/hashicorp/terraform/internal/configs"
 	"github.com/hashicorp/terraform/internal/dag"
-	"github.com/hashicorp/terraform/internal/tfdiags"
 )
 
 // ConfigTransformer is a GraphTransformer that adds all the resources
@@ -63,23 +61,23 @@ func (t *ConfigTransformer) Transform(g *Graph) error {
 	}
 
 	// Start the transformation process
-	return t.transform(g, t.Config)
+	return t.transform(g, t.Config, t.generateConfigPathForImportTargets)
 }
 
-func (t *ConfigTransformer) transform(g *Graph, config *configs.Config) error {
+func (t *ConfigTransformer) transform(g *Graph, config *configs.Config, generateConfigPath string) error {
 	// If no config, do nothing
 	if config == nil {
 		return nil
 	}
 
 	// Add our resources
-	if err := t.transformSingle(g, config); err != nil {
+	if err := t.transformSingle(g, config, generateConfigPath); err != nil {
 		return err
 	}
 
 	// Transform all the children without generating config.
 	for _, c := range config.Children {
-		if err := t.transform(g, c); err != nil {
+		if err := t.transform(g, c, ""); err != nil {
 			return err
 		}
 	}
@@ -87,7 +85,7 @@ func (t *ConfigTransformer) transform(g *Graph, config *configs.Config) error {
 	return nil
 }
 
-func (t *ConfigTransformer) transformSingle(g *Graph, config *configs.Config) error {
+func (t *ConfigTransformer) transformSingle(g *Graph, config *configs.Config, generateConfigPath string) error {
 	path := config.Path
 	module := config.Module
 	log.Printf("[TRACE] ConfigTransformer: Starting for path: %v", path)
@@ -104,15 +102,8 @@ func (t *ConfigTransformer) transformSingle(g *Graph, config *configs.Config) er
 	// Only include import targets that are targeting the current module.
 	var importTargets []*ImportTarget
 	for _, target := range t.importTargets {
-		switch {
-		case target.Config == nil:
-			if target.LegacyAddr.Module.Module().Equal(config.Path) {
-				importTargets = append(importTargets, target)
-			}
-		default:
-			if target.Config.ToResource.Module.Equal(config.Path) {
-				importTargets = append(importTargets, target)
-			}
+		if targetModule := target.Addr.Module.Module(); targetModule.Equal(config.Path) {
+			importTargets = append(importTargets, target)
 		}
 	}
 
@@ -131,12 +122,7 @@ func (t *ConfigTransformer) transformSingle(g *Graph, config *configs.Config) er
 		var matchedIndices []int
 		for ix, i := range importTargets {
-			if i.LegacyAddr.ConfigResource().Equal(configAddr) {
-				matchedIndices = append(matchedIndices, ix)
-				imports = append(imports, i)
-
-			}
-			if i.Config != nil && i.Config.ToResource.Equal(configAddr) {
+			if target := i.Addr.ContainingResource().Config(); target.Equal(configAddr) {
 				// This import target has been claimed by an actual resource,
 				// let's make a note of this to remove it from the targets.
 				matchedIndices = append(matchedIndices, ix)
@@ -171,76 +157,39 @@ func (t *ConfigTransformer) transformSingle(g *Graph, config *configs.Config) er
 		g.Add(node)
 	}
 
-	// If any import targets were not claimed by resources and we are
-	// generating configuration, then let's add them into the graph now.
-
-	// TODO: use diagnostics to collect detailed errors for now, even though we
-	// can only return an error from here. This gives the user more immediate
-	// feedback, rather than waiting an unknown amount of time for the plan to
-	// fail.
-	var diags tfdiags.Diagnostics
-
+	// If any import targets were not claimed by resources, then let's add them
+	// into the graph now.
+	//
+	// We actually know that if any of the resources aren't claimed and
+	// generateConfig is false, then we have a problem. But, we can't raise a
+	// nice error message from this function.
+	//
+	// We'll add the nodes that we know will fail, and catch them again later
+	// in the processing when we are in a position to raise a much more helpful
+	// error message.
+	//
+	// TODO: We could actually catch and process these kind of problems earlier,
+	// this is something that could be done during the Validate process.
 	for _, i := range importTargets {
-		if path.IsRoot() {
-			// If we have a single instance import target in the root module, we
-			// can suggest config generation.
-			// We do need to make sure there are no dynamic expressions here
-			// and we can parse this at all.
-			var toDiags tfdiags.Diagnostics
-			traversal, hd := hcl.AbsTraversalForExpr(i.Config.To)
-			toDiags = toDiags.Append(hd)
-			to, td := addrs.ParseAbsResourceInstance(traversal)
-			toDiags = toDiags.Append(td)
-			canGenerate := !toDiags.HasErrors() && to.Resource.Key == addrs.NoKey
-
-			if t.generateConfigPathForImportTargets != "" && canGenerate {
-				log.Printf("[DEBUG] ConfigTransformer: adding config generation node for %s", i.Config.ToResource)
-
-				// TODO: if config generation is ever supported for for_each
-				// resources, this will add multiple nodes for the same
-				// resource
-				abstract := &NodeAbstractResource{
-					Addr:               i.Config.ToResource,
-					importTargets:      []*ImportTarget{i},
-					generateConfigPath: t.generateConfigPathForImportTargets,
-				}
-				var node dag.Vertex = abstract
-				if f := t.Concrete; f != nil {
-					node = f(abstract)
-				}
-
-				g.Add(node)
-				continue
-			}
+		// The case in which an unmatched import block targets an expanded
+		// resource instance can error here. Others can error later.
+		if i.Addr.Resource.Key != addrs.NoKey {
+			return fmt.Errorf("Config generation for count and for_each resources not supported.\n\nYour configuration contains an import block with a \"to\" address of %s. This resource instance does not exist in configuration.\n\nIf you intended to target a resource that exists in configuration, please double-check the address. Otherwise, please remove this import block or re-run the plan without the -generate-config-out flag to ignore the import block.", i.Addr)
+		}
 
-		if t.generateConfigPathForImportTargets != "" && !canGenerate {
-			diags = diags.Append(&hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Cannot generate configuration",
-				Detail:   "The given import block is not compatible with config generation. The -generate-config-out option cannot be used with import blocks which use for_each, or resources which use for_each or count.",
-				Subject:  i.Config.To.Range().Ptr(),
-			})
-			continue
-		}
+		abstract := &NodeAbstractResource{
+			Addr:               i.Addr.ConfigResource(),
+			importTargets:      []*ImportTarget{i},
+			generateConfigPath: generateConfigPath,
+		}
 
-		if canGenerate {
-			diags = diags.Append(&hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Configuration for import target does not exist",
-				Detail:   fmt.Sprintf("The configuration for the given import target %s does not exist. If you wish to automatically generate config for this resource, use the -generate-config-out option within terraform plan. Otherwise, make sure the target resource exists within your configuration. For example:\n\n terraform plan -generate-config-out=generated.tf", to),
-				Subject:  i.Config.To.Range().Ptr(),
-			})
-			continue
-		}
+		var node dag.Vertex = abstract
+		if f := t.Concrete; f != nil {
+			node = f(abstract)
 		}
 
-		diags = diags.Append(&hcl.Diagnostic{
-			Severity: hcl.DiagError,
-			Summary:  "Configuration for import target does not exist",
-			Detail:   fmt.Sprintf("The configuration for the given import target %s does not exist. All target instances must have an associated configuration to be imported.", i.Config.ToResource),
-			Subject:  i.Config.To.Range().Ptr(),
-		})
+		g.Add(node)
 	}
 
-	return diags.Err()
+	return nil
 }
diff --git a/internal/terraform/transform_expand.go b/internal/terraform/transform_expand.go
index 38d7ababc11f..45bcf44b7088 100644
--- a/internal/terraform/transform_expand.go
+++ b/internal/terraform/transform_expand.go
@@ -3,10 +3,6 @@ package terraform
 
-import (
-	"github.com/hashicorp/terraform/internal/tfdiags"
-)
-
 // GraphNodeDynamicExpandable is an interface that nodes can implement
 // to signal that they can be expanded at eval-time (hence dynamic).
 // These nodes are given the eval context and are expected to return
@@ -20,5 +16,5 @@ type GraphNodeDynamicExpandable interface {
 	// of calling ErrWithWarnings on a tfdiags.Diagnostics value instead,
 	// in which case the caller will unwrap it and gather the individual
 	// diagnostics.
-	DynamicExpand(EvalContext) (*Graph, tfdiags.Diagnostics)
+	DynamicExpand(EvalContext) (*Graph, error)
 }
diff --git a/scripts/build.sh b/scripts/build.sh
index 8ac354ea3e9f..61a6aed0af60 100755
--- a/scripts/build.sh
+++ b/scripts/build.sh
@@ -11,7 +11,7 @@ while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
 DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
 
 # Change into that directory
-cd "$DIR" || exit
+cd "$DIR"
 
 # Determine the arch/os combos we're building for
 XC_ARCH=${XC_ARCH:-"386 amd64 arm"}
diff --git a/scripts/staticcheck.sh b/scripts/staticcheck.sh
index d76518ad11cc..91f4742a11dd 100755
--- a/scripts/staticcheck.sh
+++ b/scripts/staticcheck.sh
@@ -13,6 +13,9 @@ skip=$skip"|internal/planproto|internal/tfplugin5|internal/tfplugin6"
 
 packages=$(go list ./... | egrep -v ${skip})
 
-# Note that we globally disable some checks. The list is controlled by the
-# top-level staticcheck.conf file in this repo.
-go run honnef.co/go/tools/cmd/staticcheck ${packages}
+# We are skipping style-related checks, since terraform intentionally breaks
+# some of these. The goal here is to find issues that reduce code clarity, or
+# may result in bugs. We also disable function deprecation checks (SA1019)
+# because our policy is to update deprecated calls locally while making other
+# nearby changes, rather than to make cross-cutting changes to update them all.
+go run honnef.co/go/tools/cmd/staticcheck -checks 'all,-SA1019,-ST*' ${packages}
diff --git a/staticcheck.conf b/staticcheck.conf
deleted file mode 100644
index 20851da80728..000000000000
--- a/staticcheck.conf
+++ /dev/null
@@ -1,7 +0,0 @@
-# Our goal in using staticcheck is to find issues that reduce code clarity, or
-# may result in bugs. We are skipping style-related checks, since terraform
-# intentionally breaks some of these. We also disable function deprecation
-# checks (SA1019) because our policy is to update deprecated calls locally while
-# making other nearby changes, rather than to make cross-cutting changes to
-# update them all.
-checks = ["all", "-SA1019", "-ST*"]
diff --git a/version/VERSION b/version/VERSION
index de023c91b16b..9edc58bb1dd8 100644
--- a/version/VERSION
+++ b/version/VERSION
@@ -1 +1 @@
-1.7.0-dev
+1.6.4
diff --git a/website/docs/language/checks/index.mdx b/website/docs/language/checks/index.mdx
index c0905da06f61..fe0c45b09a33 100644
--- a/website/docs/language/checks/index.mdx
+++ b/website/docs/language/checks/index.mdx
@@ -97,7 +97,7 @@ Preconditions are unique amongst the custom conditions in that they execute _bef
 
 You can often use postconditions interchangeably with check blocks to validate resources and data sources.
 
-For example, you can [rewrite the above `check` block example](#syntax) to use a postcondition instead. The below code uses a `postcondition` block to validate that the Terraform website returns the expected status code of `200`.
+For example, you can [rewrite the above `check` block example](#checks-syntax) to use a postcondition instead. The below code uses a `postcondition` block to validate that the Terraform website returns the expected status code of `200`.
 
 ```hcl
 data "http" "terraform_io" {
diff --git a/website/docs/language/functions/nonsensitive.mdx b/website/docs/language/functions/nonsensitive.mdx
index c25ba18432ab..7518eb2aa03f 100644
--- a/website/docs/language/functions/nonsensitive.mdx
+++ b/website/docs/language/functions/nonsensitive.mdx
@@ -73,8 +73,10 @@ due to an inappropriate call to `nonsensitive` in your module, that's a bug in
 your module and not a bug in Terraform itself.
 **Use this function sparingly and only with due care.**
 
-`nonsensitive` will make no changes to values that aren't marked as sensitive, even though such a call may be redundant and potentially confusing.
-Use `nonsensitive` only after careful consideration and with definite intent.
+`nonsensitive` will return an error if you pass a value that isn't marked
+as sensitive, because such a call would be redundant and potentially confusing
+or misleading to a future maintainer of your module. Use `nonsensitive` only
+after careful consideration and with definite intent.
 
 Consider including a comment adjacent to your call to explain to future
 maintainers what makes the usage safe and thus what invariants they must take
diff --git a/website/docs/language/providers/requirements.mdx b/website/docs/language/providers/requirements.mdx
index ff80125850f5..5d51e6e52e3d 100644
--- a/website/docs/language/providers/requirements.mdx
+++ b/website/docs/language/providers/requirements.mdx
@@ -119,10 +119,6 @@ follows:
 
 `[<HOSTNAME>/]<NAMESPACE>/<TYPE>`
 
-Examples of valid provider source address formats include:
-- `NAMESPACE/TYPE`
-- `HOSTNAME/NAMESPACE/TYPE`
-
 * **Hostname** (optional): The hostname of the Terraform registry that
   distributes the provider. If omitted, this defaults to `registry.terraform.io`,
   the hostname of
diff --git a/website/docs/language/resources/syntax.mdx b/website/docs/language/resources/syntax.mdx
index d423249071dc..6ff1053be8a0 100644
--- a/website/docs/language/resources/syntax.mdx
+++ b/website/docs/language/resources/syntax.mdx
@@ -15,16 +15,17 @@ Each resource block describes one or more infrastructure objects, such
 as virtual networks, compute instances, or higher-level components such
 as DNS records.
 
-For information about how Terraform manages resources after applying a configuration, refer to
+To see how Terraform manages resources when applying a configuration, see
 [Resource Behavior](/terraform/language/resources/behavior).
 
 ## Resource Syntax
 
-A `resource` block declares a resource of a specific type
-with a specific local name. Terraform uses the name when referring to the resource
-in the same module, but it has no meaning outside that module's scope.
+A "resource" block declares a resource of a specific type
+with a specific local name. The name is used to refer to this resource
+in the same Terraform module but has no meaning outside that module's scope.
 
-In the following example, the `aws_instance` resource type is named `web`. The resource type and name must be unique within a module because they serve as an identifier for a given resource.
+The resource type ("aws_instance") and name ("web") together must be unique within a module because they
+serve as an identifier for a given resource.
 
 ```hcl
 resource "aws_instance" "web" {
@@ -43,7 +44,7 @@ contain only letters, digits, underscores, and dashes.
 
 Resource declarations can include more advanced features, such as single
 resource declarations that produce multiple similar remote objects, but only
-a small subset is required for initial use.
+a small subset is required for initial use. You will learn more later in this page.
 
 ## Resource Types
 
@@ -60,16 +61,16 @@ infrastructure platform.
 
 Providers are distributed separately from Terraform, but Terraform can
 automatically install most providers when initializing a working directory.
 
-To manage resources, a Terraform module must specify the required providers. Refer to
-[Provider Requirements](/terraform/language/providers/requirements) for additional information.
+To manage resources, a Terraform module must specify the required providers, see
+[Provider Requirements](/terraform/language/providers/requirements).
 
 Most providers need some configuration to access their remote API,
-which is provided by the root module. Refer to
-[Provider Configuration](/terraform/language/providers/configuration) for additional information.
+which is provided by the root module, see
+[Provider Configuration](/terraform/language/providers/configuration)
 
 Based on a resource type's name, Terraform can usually determine which provider to use.
 By convention, resource type names start with their provider's preferred local name.
-When using multiple configurations of a provider or non-preferred local provider names,
+When using multiple configurations of a provider (or non-preferred local provider names),
 you must use [the `provider` meta-argument](/terraform/language/meta-arguments/resource-provider)
 to manually choose a provider configuration.
@@ -83,7 +84,7 @@ The values for resource arguments can make full use of
 [expressions](/terraform/language/expressions) and other dynamic Terraform
 language features.
 
-[Meta-arguments](#meta-arguments) are defined by Terraform
+[Meta-Arguments](#meta-arguments) are defined by Terraform
 itself and apply across all resource types.
 
 ### Documentation for Resource Types
 
@@ -91,13 +92,14 @@ and apply across all resource types.
 
 Every Terraform provider has its own documentation, describing its resource
 types and their arguments.
 
-Some provider documentation is still part of Terraform's core documentation,
+Some provider documentation is still part of Terraform's core documentation
 but the [Terraform Registry](https://registry.terraform.io/browse/providers)
-is the main home for all publicly available provider docs.
+is now the main home for all publicly available provider docs.
 
 When viewing a provider's page on the Terraform
-Registry, you can click the **Documentation** link in the header to browse its
-documentation. The documentation is versioned. To choose a different version of the provider documentation, click on the version in the provider breadcrumbs to choose a version from the drop-down menu.
+Registry, you can click the "Documentation" link in the header to browse its
+documentation, which is versioned. Use the dropdown version menu in the header
+to switch the version.
 
 ## Meta-Arguments
 
@@ -131,8 +133,8 @@ resource "aws_instance" "example" {
 }
 ```
 
-[Custom condition checks](/terraform/language/expressions/custom-conditions#preconditions-and-postconditions)
-can help capture assumptions so that future maintainers
+[Custom Condition Checks](/terraform/language/expressions/custom-conditions#preconditions-and-postconditions)
+can help capture assumptions, helping future maintainers
 understand the configuration design and intent. They
 also return useful information about errors earlier and in context, helping
 consumers to diagnose issues in their configuration.