
Re-configuring remote state raises conflict on pull #5410

Closed
robzienert opened this issue Mar 2, 2016 · 6 comments · Fixed by #7320
@robzienert (Contributor)

I'm currently working through a POC that uses CircleCI to manage plans and applies of Terraform state, with S3 as the remote backend. Circle won't keep the filesystem around between builds, so I can't rely on the local .terraform folder sticking around. When a PR is opened, I run the following commands to produce a plan:

  1. terraform remote config [...]
  2. terraform remote pull
  3. terraform plan -out plan.out
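For illustration, the three steps above can be sketched as a small helper that assembles the CLI invocations for a CI script. This is a hypothetical helper, not part of Terraform or the original setup; the backend values are placeholders, and only the command shapes come from the issue.

```python
# Hypothetical helper that builds the three Terraform CLI invocations
# used in the CI flow above. Only the command shapes are taken from the
# issue; the helper itself is an illustration, not Terraform code.

def remote_config_argv(bucket, key, region):
    """Build the argv for `terraform remote config` with an S3 backend."""
    return [
        "terraform", "remote", "config",
        "-backend=s3",
        f"-backend-config=bucket={bucket}",
        f"-backend-config=key={key}",
        f"-backend-config=region={region}",
    ]

def plan_argv(out_file="plan.out"):
    """Build the argv for `terraform plan -out <file>`."""
    return ["terraform", "plan", "-out", out_file]

def ci_sequence(bucket, key, region):
    """The full CI sequence: configure the backend, pull state, then plan."""
    return [
        remote_config_argv(bucket, key, region),
        ["terraform", "remote", "pull"],
        plan_argv(),
    ]
```

Each argv could then be handed to `subprocess.run` in order, failing the build if any step returns non-zero.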

The first pass worked fine and I could apply state, but when I test the condition where .terraform does not exist, I get unexpected behavior:

terraform remote config -backend=s3 -backend-config="bucket=foo" -backend-config="key=states/aws/s3_preprod_useast1/terraform.tfstate" -backend-config="region=us-east-1"
Initialized blank state with remote state enabled!
Error while performing the initial pull. The error message is shown
below. Note that remote state was properly configured, so you don't
need to reconfigure. You can now use `push` and `pull` directly.

Unknown refresh result: Local and remote state conflict, manual resolution required

For giggles, I run pull just to make sure I'm not crazy:

$ terraform remote pull
Failed to read state: Unknown refresh result: Local and remote state conflict, manual resolution required

The local state looks like this:

# .terraform/terraform.tfstate
{
    "version": 1,
    "serial": 0,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "foo",
            "key": "states/aws/s3_preprod_useast1/terraform.tfstate",
            "region": "us-east-1"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {}
        }
    ]
}

And the remote state in s3 looks like this:

# s3://foo/states/aws/s3_preprod_useast1/terraform.tfstate
{
    "version": 1,
    "serial": 0,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "foo",
            "key": "states/aws/s3_preprod_useast1/terraform.tfstate",
            "region": "us-east-1"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "aws_s3_bucket.rob_test": {
                    "type": "aws_s3_bucket",
                    "primary": {
                        "id": "foo-rob-test",
                        "attributes": {
                            "acl": "private",
                            "arn": "arn:aws:s3:::foo-rob-test",
                            "bucket": "foo-rob-test",
                            "cors_rule.#": "0",
                            "force_destroy": "false",
                            "hosted_zone_id": "xxxx",
                            "id": "foo-rob-test",
                            "policy": "{\"Statement\":[{\"Action\":\"s3:GetObject\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"*\"},\"Resource\":\"arn:aws:s3:::foo-rob-test/*\",\"Sid\":\"AddPerm\"}],\"Version\":\"2008-10-17\"}",
                            "region": "us-east-1",
                            "tags.#": "0",
                            "website.#": "0"
                        }
                    }
                }
            }
        }
    ]
}
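Comparing the two state files above: both carry serial 0, but only the remote has resources, so neither side can be assumed newer. A minimal reconstruction of what the refresh decision appears to be doing (my assumption based on the symptom, not Terraform's actual source):

```python
import json

def refresh_result(local, remote):
    """Rough reconstruction of the remote-state refresh decision.

    Mirrors the symptom in the issue: when the serials are equal but the
    contents differ, neither side can be assumed newer, so the refresh
    reports a conflict instead of silently picking one.
    """
    if local is None:
        return "updated local"          # no local state: take the remote
    if local["serial"] < remote["serial"]:
        return "updated local"          # remote is strictly newer
    if local["serial"] > remote["serial"]:
        return "local newer"
    # Equal serials: safe only if the contents actually match.
    if json.dumps(local, sort_keys=True) == json.dumps(remote, sort_keys=True):
        return "in sync"
    return "conflict: manual resolution required"
```

Under this reading, the freshly initialized blank local state (serial 0, no resources) collides with the remote state (also serial 0, one resource), producing the "manual resolution required" error.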

I would have expected remote config to actually pull the state, but it doesn't. For extra sanity, I also tried adding -pull=true when running remote config, just in case there was a documentation breakdown.

I verified that everything works fine if I just pull the remote state:

$ aws s3 cp s3://foo/states/aws/s3_preprod_useast1/terraform.tfstate terraform.tfstate --region us-east-1
$ mv terraform.tfstate .terraform/terraform.tfstate
$ terraform remote pull
Local and remote state in sync

I'm on 0.6.12, but it was happening on 0.6.11 as well.

@robzienert robzienert changed the title Configuring remote state causes conflict on initial pull Re-configuring remote state raises conflict on pull Mar 2, 2016
@tomdavidson

It fails the same way on 0.6.14 when I include the remote state as a module in my main.tf.

But if I have a plan that contains only the module, it succeeds without issue (though I cannot destroy).

@fwisehc commented May 18, 2016

I have the same issue

@apparentlymart (Contributor)

Hi @robzienert! Sorry for the troubles here.

I want to make sure I'm understanding the scenario correctly: you said you saw this when you did terraform remote config without the .terraform directory present. Do you have the terraform.tfstate file (not in .terraform) present at this point, or do you have no local state at all?

I think my work in #6540 would actually unintentionally fix this problem for you, because it adds a special case that allows Terraform to silently clobber a local state that has no resources in it. The goal of that exception was to allow running terraform remote config when an empty terraform.tfstate is already present, but I think it would also make Terraform automatically clean up your issue here.

However, I'm pretty sure it's a bug that this arose in the first place, assuming that you were starting from a condition of having no local state whatsoever... so we should probably get to the bottom of that rather than just papering over it with the change in #6540.
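The special case described above (safely clobbering a local state that has no resources) could look roughly like this. This is a hypothetical sketch of the check, not the actual code in #6540 or #7320:

```python
def state_is_empty(state):
    """Return True when a state has no resources in any module.

    Hypothetical sketch of the special case described above: a local
    state like this is safe to silently replace with the remote copy,
    since discarding it loses nothing.
    """
    if state is None:
        return True
    for module in state.get("modules", []):
        if module.get("resources"):
            return False
    return True
```

Applied to this issue, the blank local state that `terraform remote config` initializes would pass this check, so the initial pull could overwrite it with the remote copy instead of reporting a conflict.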

@dtolnay (Contributor) commented Jul 6, 2016

I have a fix in #7320 which allows refreshing a local state with no resources.

@dtolnay (Contributor) commented Sep 2, 2016

Still waiting on a review of #7320.

@ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 21, 2020