This repository has been archived by the owner on Feb 5, 2020. It is now read-only.

Moving state after cluster creation breaks manifest generation #1135

Closed
discordianfish opened this issue Jun 20, 2017 · 10 comments

@discordianfish
Contributor

Looks like moving the state directory (build/cluster) causes an error on the next plan/apply:

* module.bootkube.template_dir.bootkube-bootstrap: template_dir.bootkube-bootstrap: could not generate output checksum: lstat /home/fish/work/x/y/tectonic-installer-config/state/staging/.terraform/modules/0391b25f8df845582a6b6a555d42296b/resources/bootstrap-manifests: no such file or directory
* module.tectonic.template_dir.tectonic: 1 error(s) occurred:

This is because template_dir.bootkube's source_dir is a fully qualified path:

                "template_dir.bootkube": {
...
                    "primary": {
                        "id": "c07fc3f84d2a3e5dfecc6e8c2c73b9ea65901163",
                        "attributes": {
                            "destination_dir": "./generated/manifests",
                            "id": "c07fc3f84d2a3e5dfecc6e8c2c73b9ea65901163",
                            "source_dir": "/home/fish/work/x/y/tectonic-installer-config/state/staging/.terraform/modules/0391b25f8df845582a6b6a555d42296b/resources/manifests",

I don't think there is a reason to keep the fully qualified path in the state. If only the relative path were stored, things would be more portable. This matters especially in disaster recovery, where the environment might not be reproduced 1:1 and issues like this add to the time to recovery (TTR).
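
The resource producing that state entry looks roughly like this (a trimmed sketch of modules/bootkube/assets.tf with the template vars omitted). ${path.module} expands to the module's absolute on-disk path under .terraform/modules/<hash>/, and that expanded absolute path is what gets persisted as source_dir:

resource "template_dir" "bootkube" {
  # path.module resolves to an absolute path once the module has been
  # fetched into .terraform/modules/, so the absolute source_dir is
  # what ends up stored in the state file.
  source_dir      = "${path.module}/resources/manifests"
  destination_dir = "./generated/manifests"
}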

@Quentin-M
Contributor

Quentin-M commented Jun 21, 2017

We are using the path.module interpolation variable, which is the one recommended by Terraform. However, it is not normalized to a path relative to the current working directory, as documented upstream:

hashicorp/terraform#7613
hashicorp/terraform#8204
hashicorp/terraform#10777
hashicorp/terraform#7927
hashicorp/terraform#8534
(and more)

We could experiment with dropping path.module somehow.
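
One shape that experiment could take (a hypothetical sketch, not how the modules are wired today): have the root module pass the resources directory in explicitly, so the module never derives an absolute path itself:

variable "resources_dir" {
  description = "Path to this module's resources directory, supplied by the caller"
}

resource "template_dir" "bootkube" {
  # No path.module here; whatever the caller passes (ideally a relative
  # path) is what ends up in the state.
  source_dir      = "${var.resources_dir}/manifests"
  destination_dir = "./generated/manifests"
}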

@Quentin-M Quentin-M added this to the Sprint 4 milestone Jun 21, 2017
@Quentin-M Quentin-M self-assigned this Jun 26, 2017
@sym3tri sym3tri modified the milestones: Sprint 4, Sprint 5 Jun 30, 2017
@discordianfish
Contributor Author

Just looked into this again. All the linked issues read like it's not a big deal, but I have no idea how this is supposed to work. It looks to me like, in combination with template_dir, it's impossible to get around this issue and it can only be fixed upstream.

@rithujohn191 rithujohn191 modified the milestones: Sprint 6, Sprint 5 Jul 10, 2017
@scottames

Running into the same issue here. For the short term, I was able to hack around it by replacing ${path.module} with relative paths. This only works when following the CoreOS aws-terraform documentation, more specifically when running terraform from the root of the tectonic-installer directory. Unfortunately, this is not very flexible and will break when referencing platforms/aws as a module, which could be desired.

These changes are AWS-specific, but the same approach should work for other modules as well (not a long-term fix, in my opinion):

diff --git a/modules/bootkube/assets.tf b/modules/bootkube/assets.tf
index eb5dc3f..b9f8d67 100644
--- a/modules/bootkube/assets.tf
+++ b/modules/bootkube/assets.tf
@@ -15,7 +15,7 @@ data "null_data_source" "etcd" {
 
 resource "template_dir" "experimental" {
   count           = "${var.experimental_enabled ? 1 : 0}"
-  source_dir      = "${path.module}/resources/experimental/manifests"
+  source_dir      = "modules/bootkube/resources/experimental/manifests"
   destination_dir = "./generated/experimental"
 
   vars {
@@ -27,7 +27,7 @@ resource "template_dir" "experimental" {
 
 resource "template_dir" "bootstrap-experimental" {
   count           = "${var.experimental_enabled ? 1 : 0}"
-  source_dir      = "${path.module}/resources/experimental/bootstrap-manifests"
+  source_dir      = "modules/bootkube/resources/experimental/bootstrap-manifests"
   destination_dir = "./generated/bootstrap-experimental"
 
   vars {
@@ -39,7 +39,7 @@ resource "template_dir" "bootstrap-experimental" {
 
 resource "template_dir" "etcd-experimental" {
   count           = "${var.experimental_enabled ? 1 : 0}"
-  source_dir      = "${path.module}/resources/experimental/etcd"
+  source_dir      = "modules/bootkube/resources/experimental/etcd"
   destination_dir = "./generated/etcd"
 
   vars {
@@ -50,7 +50,7 @@ resource "template_dir" "etcd-experimental" {
 
 # Self-hosted manifests (resources/generated/manifests/)
 resource "template_dir" "bootkube" {
-  source_dir      = "${path.module}/resources/manifests"
+  source_dir      = "modules/bootkube/resources/manifests"
   destination_dir = "./generated/manifests"
 
   vars {
@@ -110,7 +110,7 @@ resource "template_dir" "bootkube" {
 
 # Self-hosted bootstrapping manifests (resources/generated/manifests-bootstrap/)
 resource "template_dir" "bootkube-bootstrap" {
-  source_dir      = "${path.module}/resources/bootstrap-manifests"
+  source_dir      = "modules/bootkube/resources/bootstrap-manifests"
   destination_dir = "./generated/bootstrap-manifests"
 
   vars {
@@ -157,7 +157,7 @@ resource "local_file" "etcd_client_key" {
 
 # kubeconfig (resources/generated/auth/kubeconfig)
 data "template_file" "kubeconfig" {
-  template = "${file("${path.module}/resources/kubeconfig")}"
+  template = "${file("modules/bootkube/resources/kubeconfig")}"
 
   vars {
     ca_cert      = "${base64encode(var.ca_cert == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_cert)}"
@@ -174,7 +174,7 @@ resource "local_file" "kubeconfig" {
 
 # bootkube.sh (resources/generated/bootkube.sh)
 data "template_file" "bootkube-sh" {
-  template = "${file("${path.module}/resources/bootkube.sh")}"
+  template = "${file("modules/bootkube/resources/bootkube.sh")}"
 
   vars {
     bootkube_image = "${var.container_images["bootkube"]}"
@@ -188,5 +188,5 @@ resource "local_file" "bootkube-sh" {
 
 # bootkube.service (available as output variable)
 data "template_file" "bootkube_service" {
-  template = "${file("${path.module}/resources/bootkube.service")}"
+  template = "${file("modules/bootkube/resources/bootkube.service")}"
 }
diff --git a/modules/tectonic/assets.tf b/modules/tectonic/assets.tf
index 30b21e6..b459ea4 100644
--- a/modules/tectonic/assets.tf
+++ b/modules/tectonic/assets.tf
@@ -5,7 +5,7 @@ resource "random_id" "cluster_id" {
 
 # Kubernetes Manifests (resources/generated/manifests/)
 resource "template_dir" "tectonic" {
-  source_dir      = "${path.module}/resources/manifests"
+  source_dir      = "modules/tectonic/resources/manifests"
   destination_dir = "./generated/tectonic"
 
   vars {

@discordianfish
Contributor Author

I've tried that, but it breaks at some point. Not sure where exactly, tbh; I think it was creating a new cluster that didn't work (while updating an existing one worked). Have you tried both?

@robszumski
Member

We might need to tackle this issue while fixing #1363

@scottames

@discordianfish both worked for me initially. I'll do some more testing in the next few days to be sure.
@robszumski 👍

@discordianfish
Contributor Author

I don't know how this can work, given that the Makefile changes into the build/ directory. What might work is using ${path.root}/modules/... though. Just came across that.
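
Per resource that would look something like the following (untested sketch; whether ${path.root}, which points at the root configuration's directory, survives the Makefile's cd into build/ is exactly what would need verifying):

resource "template_dir" "bootkube" {
  # path.root refers to the root module's directory rather than the
  # module's download location under .terraform/modules/.
  source_dir      = "${path.root}/modules/bootkube/resources/manifests"
  destination_dir = "./generated/manifests"
}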

@so0k

so0k commented Aug 31, 2017

I do believe this is an upstream issue in the local and template providers. For our setup with a remote backend for the state file, we decided to standardise on an absolute output_path across the team.
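
In practice that looks something like this (hypothetical sketch; the variable name and the /opt/tectonic/assets path are made up for illustration, not taken from this repo):

variable "assets_dir" {
  description = "Absolute path for generated assets, agreed on across the team"
  default     = "/opt/tectonic/assets"
}

resource "local_file" "kubeconfig" {
  content  = "${data.template_file.kubeconfig.rendered}"
  filename = "${var.assets_dir}/auth/kubeconfig"
}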

@discordianfish
Contributor Author

Yes, this probably requires upstream changes. I worked around it by using a Linux container into which I mount the state at a fixed, absolute location.

@sym3tri sym3tri added help wanted and removed size/M labels Sep 5, 2017
@sym3tri
Contributor

sym3tri commented Jun 25, 2018

Since we're working on the next generation of the installer and there is a sufficient workaround, we won't be fixing this in this repo.

See our blog for any additional details:
https://coreos.com/blog/coreos-tech-to-combine-with-red-hat-openshift
