Question: Multiple Atlantis Servers #653
Comments
A custom workflow would be able to call a script you created on the Atlantis server capable of making that distinction. It'd be pretty simple too: basically "if environment matches, execute plan/apply, else exit 0".
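For illustration only, a hedged sketch of that idea as a server-side workflow: the run step checks an environment variable (PROVIDER here is hypothetical, set differently in each Atlantis server's environment) and either produces a plan or exits cleanly so the other server stays silent.

workflows:
  local-only:
    plan:
      steps:
      - init
      # Plan only if this server owns the changed resources; otherwise do nothing.
      # $PLANFILE is the plan output path Atlantis provides to custom run steps.
      - run: if [ "$PROVIDER" = "provider-a" ]; then terraform plan -input=false -out "$PLANFILE"; else exit 0; fi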
I've been reviewing the workflow/scripting possibilities using atlantis.yaml and repos.yaml. It looks like I would be able to do something like this:

atlantis.yaml (uploaded to the repo):
version: 3

repos.yaml (passed to the local provider's Atlantis server with --repo-config):
workflows:

Do you see any issues with this solution?
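(The snippets above were truncated in the original. Purely as a hypothetical illustration of the shape being described, with placeholder project and workflow names, the pair could look roughly like this: the repo-level file maps directories to workflows, and each server's repos.yaml only does real work for "its" workflow.)

atlantis.yaml (in the repo):
version: 3
projects:
- dir: provider-a
  workflow: provider-a
- dir: provider-b
  workflow: provider-b

repos.yaml (server-side, via --repo-config, on Provider A's server):
repos:
- id: /.*/
  allowed_overrides: [workflow]
workflows:
  provider-a:
    plan:
      steps: [init, plan]
  provider-b:
    plan:
      steps:
      - run: exit 0 # this server ignores Provider B's directory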
There's no built-in way to do this in Atlantis, so the direction you're working in is on the right track. I haven't tried this myself, so I don't know the downsides. It should be pretty simple to test, though. If you work through it, please report back here for the benefit of other users.
FWIW, #326 and #310 were to address a similar use case; however, we have abandoned both. Now we just rely on cloudposse-archives#23. We currently run multiple Atlantis servers, but practice a poly-repo strategy with a centralized module catalog. We run one Atlantis server per AWS account, and each AWS account gets its own repo.
Also, #249 is related.
Thank you both! It seems clear that we'll need to use a single Atlantis server and GitHub repository. Some of the resulting warnings that I got were due to a related issue. I'll try to describe it below, but please let me know if you'd prefer that I close this issue and open a new one.
I'm now trying to use Atlantis on a nested folder structure with custom workflows that call Terragrunt:
├── Github Enterprise
Is it expected behavior that use of workflows changes the working directory to Provider 1's subfolder? The only workaround we've come up with is to avoid custom workflows and use a wrapper shell script named 'atlantis' to intercept the default terraform command and pass appropriate parameters to terragrunt.
Again, thanks for the help, and please let me know if you'd prefer that I open a new issue for this question.
Hey Devin, your directory diagrams are kinda hard to follow. It looks like your Subfolder for Provider 1 is outside your repository. Is that correct? Do you have those folders on the Atlantis server itself instead of in a Terraform repo?
I'll see if I can clean up the formatting when I get to my desk. If a custom workflow for subfolderA is used to call terragrunt against subfolderB's new resource, and a terraform.tfvars file exists under subfolderA, then the working directory of the command will be subfolderA. Since the working directory is incorrect, terragrunt will find subfolderA's terraform.tfvars (which defines only remote state) but not the new resource.
So like:

Atlantis shouldn't be doing this. It will execute the command from within the directory configured by the project's dir. For example:

version: 3
projects:
- dir: subfolderA
  workflow: pwd
- dir: subfolderA/subfolderB
  workflow: pwd
workflows:
  pwd:
    plan:
      steps:
      - run: pwd

What does the above workflow do? I would expect if
I think the issue is that I'm expecting it to retain the pwd of the changed tfvars file, regardless of the workflow in use. I'm trying to avoid defining each subfolder.
With this directory structure, it's not feasible to define a project for each subfolder. Right now, for just this cloud account, I'm using something like:
Since each new resource is in its own subfolder, we'd have to add projects for each new VM/resource to make this work. Here's an example of what it sounds like workflows would require:
Is this right? Might it be possible to do something like this?
It would, of course, need to be able to call Terragrunt with a working directory of \AWS-Shared\IT\VM. The odd part I was trying to describe in my previous post is that if terraform.tfvars doesn't exist under the workflow's defined directory, Atlantis does use the working directory of the changed file.
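(To make the maintenance burden concrete, here is a sketch of what enumerating every resource folder would look like; the folder names are invented for illustration and are not from the original post.)

version: 3
projects:
- dir: AWS-Shared/IT/VM/vm-001
  workflow: terragrunt
- dir: AWS-Shared/IT/VM/vm-002
  workflow: terragrunt
# ...plus one project entry for every new VM or resource folder

What the question is reaching for is something like dir: AWS-Shared/IT/VM/*, i.e. the wildcard behaviour discussed next.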
Okay, I see the problem now. There's no support for wildcards in dirs. I think #500 is what you need. Otherwise you'll have to write a very smart workflow that handles things, or you'll need to create an atlantis.yaml that defines all the directories.
I still don't understand this. If this is something you think is a bug, can you provide me a fully spec'd way to reproduce it, with an example repo and
#500 would help, but seems like overkill for this issue. It's also not yet available :). I'm curious, why does the use of workflows change the working directory?
How to reproduce:
The inconsistency in working directory based on the existence of terraform.tfvars under the workflow dir is more difficult to reproduce. I did it by deleting the file from the cache directory in the Atlantis container. That behavior confused me but isn't really the issue I'm reporting.
Since I'd prefer not to rely on automatically generated atlantis.yaml projects, I think I'm going to have to resort to dropping workflow usage and using a terraform shell script instead.
Yes, I agree that your path forward is your shell script. Can you provide a repro of the "workflows changing the directory" issue without using terragrunt?
Closing because I haven't heard back. |
We’d like to use multiple Atlantis servers, with each running local to its provider’s resources.
The problem with this (design 1) is that both Atlantis servers would respond to the webhooks sent by GitHub.
Design 1: Issue, both Atlantis servers react to webhooks from GitHub
├── Github Enterprise
│ └─── Repository
│ ├── Provider A subfolder
│ └── Provider B subfolder
├── Atlantis Server for Provider A
└── Atlantis Server for Provider B
Based on the design above, is there a way to configure Atlantis to do one of the following?
The closest setting I was able to find was the atlantis.yaml autoplan (when_modified) parameter. https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#use-cases
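(For reference, when_modified is a per-project autoplan filter inside atlantis.yaml. A minimal example, with an assumed directory name, is shown below; since both servers would read the same file, it cannot by itself tell the two servers apart.)

version: 3
projects:
- dir: provider-a
  autoplan:
    enabled: true
    when_modified: ["*.tf", "*.tfvars"]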
Would this, or any other native Atlantis functionality, meet these design criteria?
I understand that intercepting / conditionally forwarding these webhooks via an API gateway or load balancer might work, but I would like to avoid adding complexity to the design.
A second idea I had was using multiple Atlantis servers, with CODEOWNERS used to restrict each folder to a different service account.
Design 2: Two Atlantis servers, subfolders with CODEOWNERS
├── Github Enterprise
│ └─── Repository
│ ├─── Subfolder for Provider 1
│ │ └─── CODEOWNER=atlantis-provider-1
│ └─── Subfolder for Provider 2
│ └─── CODEOWNER=atlantis-provider-2
├── Atlantis Server
│ └─── gh-user=atlantis-provider-1
└── Atlantis Server 2
└─── gh-user=atlantis-provider-2
Lastly, if design 2 isn’t feasible, then multiple repositories might be the simplest way forward:
Design 3: Multiple Repos
├── Github Enterprise
│ ├─── Repository for Provider 1
│ └─── Repository for Provider 2
├── Atlantis Server 1
└── Atlantis Server 2
Can you provide any recommendations on a solution?
Thanks!
Devin Slick