provider/aws: data sources for AWS network planning #6819

Merged: 7 commits merged into master from f-aws-vpc-data-sources on Oct 13, 2016

Conversation

apparentlymart (Contributor) commented May 22, 2016

This is a bit of a grab-bag of AWS provider data sources that are aimed at a particular use-case: building and using VPCs and Subnets easily.

Some of the data sources in this set are not aimed directly at that goal, but help to enable it.

Here we have:

  • aws_subnet
  • aws_vpc
  • aws_region
  • aws_availability_zone and aws_availability_zones

Here is a simple example showing how a module can take a subnet id and automatically find the associated VPC:

variable "aws_subnet_id" {
    description = "Id of the subnet where the EC2 instance will be created"
}

data "aws_subnet" "target" {
    id = "${var.aws_subnet_id}"
}

resource "aws_security_group" "main" {
    vpc_id = "${data.aws_subnet.target.vpc_id}"

    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["${var.cidr_block}"]
    }

    # ...
}

resource "aws_instance" "main" {
    # ...

    vpc_security_group_ids = ["${aws_security_group.main.id}"]
    subnet_id = "${var.aws_subnet_id}"
}

output "aws_instance_id" {
    value = "${aws_instance.main.id}"
}

This is an initial implementation of some conventions I came up with for mapping EC2 Describe actions onto data sources. It includes ec2_filters.go, which contains a re-usable helper for easily implementing EC2 data sources that follow these conventions.
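
For illustration, here is a rough sketch of the convention being described: translate the data source arguments into EC2 filters, call the corresponding Describe action, and insist on exactly one match. The function name, package, argument wiring, and filter values below are assumptions made for the example, not the PR's actual code:

package aws

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/ec2"
    "github.com/hashicorp/terraform/helper/schema"
)

// dataSourceAwsSubnetReadSketch is a hypothetical read function showing the
// Describe-to-data-source mapping convention.
func dataSourceAwsSubnetReadSketch(d *schema.ResourceData, conn *ec2.EC2) error {
    // Build the filter list from the configured arguments.
    req := &ec2.DescribeSubnetsInput{
        Filters: []*ec2.Filter{
            {
                Name:   aws.String("vpc-id"),
                Values: []*string{aws.String(d.Get("vpc_id").(string))},
            },
        },
    }

    resp, err := conn.DescribeSubnets(req)
    if err != nil {
        return err
    }

    // A data source describes exactly one object, so anything else is an error.
    if len(resp.Subnets) != 1 {
        return fmt.Errorf("expected exactly one subnet, got %d", len(resp.Subnets))
    }

    // Export the discovered attributes.
    subnet := resp.Subnets[0]
    d.SetId(*subnet.SubnetId)
    d.Set("vpc_id", *subnet.VpcId)
    d.Set("cidr_block", *subnet.CidrBlock)
    return nil
}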

apparentlymart changed the title from "[WIP] provider/aws: aws_subnet data source" to "[WIP] provider/aws: aws_subnet and aws_vpc data sources" on May 22, 2016
apparentlymart changed the title from "[WIP] provider/aws: aws_subnet and aws_vpc data sources" to "[WIP] provider/aws: data sources for AWS network planning" on May 22, 2016
apparentlymart (Contributor Author) commented:

Note to self: once this is more complete, consider reworking the code from #6911 a little to use the general ec2_filters.go helper added here, rather than doing its own tag/filter handling.

apparentlymart (Contributor Author) commented:

A different patch introduced aws_availability_zones, so this needs to be reworked to build on that rather than introducing its own.

Since this is just some new data sources and doc updates I'm going to hold this until after 0.7 is out and try to land it for the first patch release after that.

apparentlymart force-pushed the f-aws-vpc-data-sources branch 4 times, most recently from b97fd97 to 5174aa4 on July 17, 2016 at 21:33
apparentlymart changed the title from "[WIP] provider/aws: data sources for AWS network planning" to "provider/aws: data sources for AWS network planning" on Jul 17, 2016
apparentlymart (Contributor Author) commented:

Since 0.7 was delayed, maybe this can get in before that release after all... I'm going to optimistically add the release-0.7 tag here, but I'm happy to remove it if this patch requires more iterations, rather than having those iterations block the release.

jen20 (Contributor) commented Jul 27, 2016

@apparentlymart For now I'm going to remove release-0.7 from this, and tentatively put it in for a 0.7.1 - seem reasonable?

apparentlymart (Contributor Author) commented:

Totally fine @jen20, and sorry for the delayed reply.

stack72 (Contributor) commented Aug 9, 2016

Hi @apparentlymart

Can you rebase this? :)

P.

apparentlymart (Contributor Author) commented:

@stack72... rebased as requested. 😀

stack72 (Contributor) commented Aug 18, 2016

LOL @apparentlymart we need another rebase please :)

apparentlymart (Contributor Author) commented:

Rebased again. The conflicts have just been other changes to the provider's data source list, so they don't actually affect the implementations of these data sources. If someone could review the main implementation, I can resolve any minor conflicts myself before merging. 😀

return fmt.Errorf("bad cidr_block %s", attr["cidr_block"])
}
if attr["availability_zone"] != "us-west-2a" {
return fmt.Errorf("bad cidr_block %s", attr["cidr_block"])
apparentlymart (Contributor Author) commented on the diff above:

This should be printing out availability_zone, not cidr_block.
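
For reference, the corrected check being suggested would look something like:

if attr["availability_zone"] != "us-west-2a" {
    return fmt.Errorf("bad availability_zone %s", attr["availability_zone"])
}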

catsby (Contributor) commented Sep 12, 2016

Overall this looks good to me, however...

I don't care for the buildEC2FilterList method. It strikes me as magical and maybe does too much. You give it an attr map to map keys/values (where I feel the keys/values are backwards, but I digress…) and you give it meta to get them from (itself essentially a black box of everything). But if you happen to have a filter block in that black box, or a tags block, you'll magically get filters from them, even if you don't want to, or at least weren't expecting that to happen.

The method itself is also undocumented, so it isn't clear how/when I would use it and how/when I would not, and there are no tests for it either. Is it only applicable to data sources? Or is it meant to start being used throughout the AWS provider?

I feel like a better method would be one that just accepts a map[string]interface{} of filter_name and value:

req.Filter = buildEC2FilterList(map[string]interface{}{
  "cidr": d.Get("cidr").(string),
  "tags": d.Get("tags"),
})

This would need some special tweaking for the filters block if we had one, maybe a special/reserved key string in the map.

It is more explicit (read: more work for implementers), but I think that's good, and as a result you get more clarity about what's happening. At least in my mind, there are fewer surprises.

I hope this feedback doesn't come off too much as a bikeshed argument, as the vast majority of this PR looks great!

Thoughts?

zms commented Sep 23, 2016

@stack72: any chance we can have this in 0.7.5?

apparentlymart (Contributor Author) commented:

@zms I have a bit more work to do first, to respond to the review feedback... I hope to get to this soon.

Commit messages from the branch:

In an attempt to always show "id" as computed we were producing a
synthetic diff for it, but this causes problems when the id attribute for
a particular data source is actually settable in configuration, since it
masks the setting from the config and forces it to always be empty.

Instead, we'll set it conditionally so that a value provided in the config
can shine through when available.

---

These functions can be used within various EC2 data sources to support
querying by filter. The following cases are supported:

- Filtering by exact equality with single attribute values
- Filtering by EC2 tag key/value pairs
- Explicitly specifying raw EC2 filters in config

This should cover most of the filter use-cases for Terraform data
sources that are built on EC2's 'Describe...' family of functions.

---

This adds a singular data source in addition to the existing plural one.
This allows retrieving data about a specific AZ.

As a helper for writing reusable modules, the AZ letter (without its
usual region name prefix) is exposed so that it can be used in
region-agnostic mappings where a different value is used per AZ, such as
for subnet numbering schemes.

---

The primary purpose of this data source is to ask the question "what is
my current region?", but it can also be used to retrieve the endpoint
hostname for a particular (possibly non-current) region, should that be
useful for some esoteric case.

---

This example demonstrates both creating a network architecture *and* the
use of data resources to minimize the number of variables needed for a
child module by discovering additional data automatically.

apparentlymart (Contributor Author) commented Sep 24, 2016

Hi @catsby! Thanks for the review and sorry for the delay in following up.

I have reorganized what was originally the buildEC2FilterList function into three smaller functions, each dealing with a different part of this problem:

  • buildEC2AttributeFilterList produces a list of filters from a map from EC2 filter keys to single values. It's now the caller's responsibility to stringify the non-string attributes in the way expected by the AWS API, and it doesn't know anything about schema.ResourceData.
  • buildEC2TagFilterList produces a list of filters from a list of ec2.Tag objects.
  • buildEC2CustomFilterList takes a value conforming to the schema produced by ec2CustomFiltersSchema and uses it to produce a list of filters based directly on the user's provided values.

I loosely modeled this after how EC2 tags are handled in tags.go, and indeed I make use of the tagsFromMap function from there to extract the tag values from the ResourceData and pass them into buildEC2TagFilterList, rather than re-inventing that wheel as I was before.

All three of these now have basic unit tests and hopefully-comprehensive-enough documentation.

Hopefully this new structure is easier to follow and makes the individual data source implementations more readable.
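
As a rough sketch of how these three helpers might be combined in a data source's Read function (the signatures here are inferred from the description above, with d and conn as in the earlier sketch; this is not copied from the PR):

// Hypothetical usage sketch; helper signatures inferred from the description above.
req := &ec2.DescribeVpcsInput{}

// Exact-match filters from single attribute values, stringified by the caller.
filters := buildEC2AttributeFilterList(map[string]string{
    "cidr": d.Get("cidr_block").(string),
})

// Tag filters, reusing tagsFromMap from tags.go to produce []*ec2.Tag.
filters = append(filters, buildEC2TagFilterList(
    tagsFromMap(d.Get("tags").(map[string]interface{})),
)...)

// Raw filters from the user's "filter" blocks, matching ec2CustomFiltersSchema.
filters = append(filters, buildEC2CustomFilterList(
    d.Get("filter").(*schema.Set),
)...)

if len(filters) > 0 {
    req.Filters = filters
}

resp, err := conn.DescribeVpcs(req)
if err != nil {
    return err
}
// ...then populate the data source attributes from resp.Vpcs as usual.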

catsby (Contributor) left a comment:

LGTM, thank you!

catsby merged commit 46ee2ef into master on Oct 13, 2016
imduffy15 (Contributor) commented:

I'm fairly sure the answer is no, but... is it possible to use this and the aws_vpc resource to ensure the default VPC is deleted in an account?

stack72 deleted the f-aws-vpc-data-sources branch on October 31, 2016 at 11:44
ghost commented Apr 20, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited the conversation to collaborators on Apr 20, 2020