
Improvement/dry provisioning #86

Merged: 11 commits merged into develop from improvement/dry_provisioning on Sep 13, 2018

Conversation


@felddy felddy commented Sep 12, 2018

Dynamic provisioning in the house. This works around this issue: hashicorp/terraform#953

@felddy felddy self-assigned this Sep 12, 2018
@felddy felddy requested a review from a team September 12, 2018 15:49
dav3r previously approved these changes Sep 13, 2018

@dav3r dav3r left a comment

This all looks good to me. I tested it in my local workspace and it seems to work as advertised, but I didn't try it in Production yet.

@dav3r dav3r dismissed their stale review September 13, 2018 14:44

Hold the phone; I think I found a bug. Checking now...

@dav3r commented Sep 13, 2018

There is a bug, but it's not in your code. However, this seems like a good place to fix it.

The problem is in these four places:

In all of the above, the code references ${aws_instance.cyhy_nmap.id} or ${aws_instance.cyhy_nessus.id}, which is the first nmap/nessus instance. What we want to do is reference the specific instance that matches the count index (see the sketch after the list):

  • ${aws_instance.cyhy_nmap.*.id[count.index]}
  • ${aws_instance.cyhy_nessus.*.id[count.index]}
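
For illustration, here is a minimal sketch of how the corrected reference would sit in the attachment resource (the count source and the omitted arguments are assumed here, not taken from the actual code):

  resource "aws_volume_attachment" "nmap_cyhy_runner_data_attachment" {
    count = "${var.nmap_instance_count}"    # assumed variable name

    # Index into the full list of instance IDs so that attachment N is paired
    # with instance N instead of always pointing at the first instance.
    instance_id = "${aws_instance.cyhy_nmap.*.id[count.index]}"
    # ...remaining arguments unchanged...
  }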

Unfortunately, when I make that change, I run into this open Terraform issue:
hashicorp/terraform#14536

Example: In my local workspace, I have zero running nmap instances and I want to ramp up to three of them. The existing code has no problem with this because every nmap_cyhy_runner_data_attachment references (incorrectly) the first nmap instance (${aws_instance.cyhy_nmap.id}). The corrected code however fails with an error like this:

* aws_volume_attachment.nmap_cyhy_runner_data_attachment: 1 error(s) occurred:

* aws_volume_attachment.nmap_cyhy_runner_data_attachment[2]: index 2 out of range for list aws_instance.cyhy_nmap.*.id (max 2) in:

${aws_instance.cyhy_nmap.*.id[count.index]}

Anybody have any ideas on how to fix this?

@jsf9k commented Sep 13, 2018

The bug report you referenced says:

Replacing array syntax with element() function (as shown in comments in code) gives the desired behaviour.

Did you try that?

@dav3r commented Sep 13, 2018

I missed that; trying it now.

@dav3r commented Sep 13, 2018

That element() workaround worked, thanks @jsf9k. I had to make one other similar change to nmap.template and nessus.template to get it to work. I'm going to do a little more testing, then commit the changes.

@jsf9k jsf9k left a comment

Please consider my comments before merging.

def main():
    # get workspace
    workspace = get_terraform_workspace()
    print('Current Terraform workspace = ', workspace)
@jsf9k left a comment

What about using logging instead of print statements?

WORKSPACE_CONFIGS constant below.
'''

import sys
@jsf9k left a comment

I like imports to be sorted in three groups in this order:

  • Python built-ins
  • External dependencies
  • Local dependencies

Then, within each group, I like the imports to be sorted alphabetically. This makes it easy to find imports as the list grows in length.

What do you think about doing that here?


Due to a terraform limitation, modules can not be scaled with the "count"
keyword the same way resource can. This leads to a great deal of copying and
pasting in order to get provisioners and other modules to execute correctlyself.
@jsf9k left a comment

Remove correctlyself and replace with correctly. 😃

dav3r and others added 6 commits September 13, 2018 12:01
…st nmap/nessus instance; see #86 (comment)

Update nmap and nessus dynamic templates to work correctly after the change above
@dav3r commented Sep 13, 2018

@jsf9k, the element() workaround appears to have a problem.

Example: When I attempt to scale up from 1 to 2 nmap instances, Terraform wants to recreate nmap_cyhy_runner_data_attachment[0], which would trigger the destruction of the cyhy_nmap[0] instance (since that is how we safely destroy the volume attachment):

-/+ aws_volume_attachment.nmap_cyhy_runner_data_attachment[0] (new resource required)
      id:                                        "vai-3307678400" => <computed> (forces new resource)
      device_name:                               "/dev/xvdb" => "/dev/xvdb"
      instance_id:                               "i-06916a3ed89b85a52" => "${element(aws_instance.cyhy_nmap.*.id, count.index)}" (forces new resource)
      skip_destroy:                              "true" => "true"
      volume_id:                                 "vol-02311105b958b2c31" => "vol-02311105b958b2c31"

It is the same issue as mentioned in this comment on Terraform issue 14536.

We need to figure out how to get Terraform to avoid destroying the existing data attachment(s) in this scenario.

…e parameters, but use element() syntax to access correct instance id in destroy provisioners. This avoids the situation where scaling up nmap/nessus instances would result in the destruction of existing volume attachments, as described in #86 (comment).
@dav3r commented Sep 13, 2018

My last commit fixes the issue I described above.
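
Reconstructed from that commit message, the resulting pattern looks roughly like the sketch below. The attachment name, device name, skip_destroy, and the two interpolation styles come from this thread; the count variable, the EBS volume resource name, and the provisioner command are assumed placeholders:

  resource "aws_volume_attachment" "nmap_cyhy_runner_data_attachment" {
    count = "${var.nmap_instance_count}"    # assumed variable name

    device_name  = "/dev/xvdb"
    # Index syntax in the resource arguments keeps existing attachments in
    # place when the instance count grows.
    instance_id  = "${aws_instance.cyhy_nmap.*.id[count.index]}"
    volume_id    = "${aws_ebs_volume.nmap_cyhy_runner_data.*.id[count.index]}"  # assumed resource name
    skip_destroy = true

    # element() syntax in the destroy-time provisioner, per the commit message;
    # the command here is only a placeholder.
    provisioner "local-exec" {
      when    = "destroy"
      command = "echo Detached volume from ${element(aws_instance.cyhy_nmap.*.id, count.index)}"
    }
  }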

@dav3r dav3r left a comment

This now gets my official 👍

@dav3r dav3r merged commit 9809302 into develop Sep 13, 2018
@dav3r dav3r deleted the improvement/dry_provisioning branch September 13, 2018 20:54