No, in API-driven workflows the only difference is that the git events are managed by the user through GitHub workflows, but they trigger the same API calls to interact with TFC. You still use the UI to approve, apply, enforce Sentinel policies, etc., instead of the TFC GitHub app managing the events.
In local execution mode it behaves as a storage backend, but at that point I don't see the point of using TFC, since it is the same as an S3 bucket.
I'm using API-driven workflows in my case.
Describe the Feature
When using TFC, the plan output, plan and apply runs, workspace, and state are managed remotely by TFC, which is not compatible with the way Atmos treats workspaces, plan output, and remote state generation today.
TFC requires the use of the cloud {} block for the state instead of the currently supported backend types in Atmos. The use of the remote {} backend is not recommended, as it is not being maintained by HashiCorp: https://hangops.slack.com/archives/C0Z93TPFX/p1690241979618549
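For reference, a minimal sketch of such a cloud block, using the my-poc workspace name from this example (the organization name is a placeholder):

```hcl
terraform {
  cloud {
    # Placeholder; must match an existing TFC organization.
    organization = "acme-org"

    workspaces {
      # Created implicitly by `terraform init` if it does not exist yet.
      name = "my-poc"
    }
  }
}
```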
The workspace with the name my-poc will be created automatically if it does not exist; therefore, Atmos should create this backend.tf file automatically based on a unique identifier, which could be something like {workspace} or a combination of {name_pattern} + {workspace}. The important distinction here is that these names need to be unique across the whole organization inside TFC. Ideally, this could be configurable inside atmos.yaml.
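As a purely hypothetical illustration of that idea (none of the names below come from an existing Atmos feature), the generated backend.tf could embed the combined identifier in the workspace name:

```hcl
# Hypothetical backend.tf that Atmos would generate per component
# (all names below are made up for illustration).
terraform {
  cloud {
    organization = "acme-org" # assumed TFC organization

    workspaces {
      # {name_pattern} + {workspace}, e.g. "acme-ue2-prod" + "vpc",
      # so the name is unique across the whole TFC organization.
      name = "acme-ue2-prod-vpc"
    }
  }
}
```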
TFC does not support the terraform plan -out flag, so this will have to be disabled when TFC mode is enabled. atmos terraform apply should not look for a saved plan, since this is handled by the terraform binary automatically as an API call.
Atmos should not look for the workspace locally (if that is what it is doing today); instead, when TFC is enabled, it should just run terraform init, and that will create the workspace automatically: https://developer.hashicorp.com/terraform/cloud-docs/run/cli

From that page, under "Implicit Workspace Creation":

If you configure the cloud block to use a workspace that doesn't yet exist in your organization, Terraform Cloud will create a new workspace with that name when you run terraform init. The output of terraform init will inform you when this happens.
Expected Behavior
No errors when running Atmos with the cloud {} block.
Use Case
Currently, manual steps are required to make Atmos work with TFC.
Describe Ideal Solution
Ideally, this would be added as a new integration.
Alternatives Considered
No response
Additional Context
https://developer.hashicorp.com/terraform/tutorials/automation/github-actions