
Data sources not refreshed on destroy #27172

Closed
eaterm opened this issue Dec 7, 2020 · 5 comments · Fixed by #27408
Assignees
Labels
bug · confirmed (a Terraform Core team member has reproduced this issue) · v0.14 (issues, primarily bugs, reported against v0.14 releases)

Comments

eaterm commented Dec 7, 2020

Terraform Version

Terraform v0.14.0

Terraform Configuration Files

provider "kubernetes" {
  load_config_file = false
  host             = data.terraform_remote_state.cluster.outputs.endpoint
}
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = var.remote_state_bucket
    key    = var.remote_state_key
    region = var.remote_state_region
  }
}
resource "kubernetes_namespace" "test" {
  metadata {
    name = "test"
  }
}

Debug Information

Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

Expected Behavior

The destroy run should refresh the data from terraform_remote_state and use it in the provider block.

Actual Behavior

When a destroy is run, the terraform_remote_state data source is not refreshed when the plan is created. When the plan is executed, the provider fails because there is no data from the remote state.

Additional Context

We use one Terraform run to create a Kubernetes cluster, then a second one to create Kubernetes resources on that cluster. The terraform_remote_state data source is used to get the cluster's hostname from the state of the first run. When running a destroy, the terraform_remote_state data source is not refreshed and the Kubernetes provider fails.

@eaterm added the bug and new (new issue not yet triaged) labels Dec 7, 2020
@apparentlymart added the v0.14 (issues, primarily bugs, reported against v0.14 releases) label Dec 8, 2020
eaterm commented Dec 8, 2020

Did some more testing: a data source from the same Terraform run also isn't refreshed.
The Kubernetes provider is never usable, because data.aws_eks_cluster_auth.default_auth.token doesn't get refreshed on a destroy.

provider "kubernetes" {
  load_config_file       = false
  host                   = aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default_auth.token
}
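For context, the data source that the `token` argument above refers to would typically be declared along these lines (a sketch; the resource names are assumptions carried over from the provider block, not part of the original report):

```hcl
data "aws_eks_cluster_auth" "default_auth" {
  # The token is generated each time this data source is read.
  # On destroy the data source is not refreshed (the bug described
  # in this issue), so the provider receives a stale token.
  name = aws_eks_cluster.default.name
}
```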

@jbardin added the confirmed (a Terraform Core team member has reproduced this issue) label and removed the new (new issue not yet triaged) label Dec 8, 2020
@jbardin jbardin self-assigned this Dec 8, 2020
@vikinghts

Our whole test pipeline is blocked by this bug. Would love to see it fixed. 👍


jbardin commented Dec 17, 2020

Hello,

For anyone who isn't aware: as a workaround, running a refresh (or applying an empty plan) immediately before a destroy will update the data source in the state.

The destroy process itself has many shortcomings, but because many of the other configurations that fail never worked previously, they are not noticed as regressions. In this case, the removal of the separate refresh walk means that data sources are not updated during an explicit destroy operation. While removing refresh fixed numerous other bugs, it evidently created another case that does not work well with the destroy command.

I think we can re-introduce a sort of "data refresh" into the destroy-plan graph, and in the process also take care of some other outstanding issues, like locals and outputs not being usable in provider configurations during destroy.
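The workaround described above can be sketched as a short CLI sequence (standard Terraform CLI commands as of v0.14; the plan-file name `tfplan` is arbitrary):

```shell
# Option 1: explicitly refresh state, so data sources are re-read
terraform refresh

# Option 2: create and apply a no-change plan, which also refreshes
# the data sources recorded in state
terraform plan -out=tfplan
terraform apply tfplan

# Either way, the subsequent destroy sees fresh data source values
# (e.g. a valid cluster endpoint and auth token) in the provider block
terraform destroy
```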

eaterm commented Dec 18, 2020

@jbardin Thank you for the workaround; it seems to resolve the issue for us.

ghost commented Feb 11, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked as resolved and limited conversation to collaborators Feb 11, 2021