diff --git a/CHANGELOG.md b/CHANGELOG.md
index 50d25d909b27..85691cc75061 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,8 +22,9 @@ BACKWARDS INCOMPATIBILITIES / NOTES:
  * `azurerm_virtual_machine` computer_name now Required
  * `aws_db_instance` now defaults `publicly_accessible` to false
  * `openstack_fw_policy_v1` now correctly applies rules in the order they are specified. Upon the next apply, current rules might be re-ordered.
- * `atlas_artifact` resource has be depracated. Please use the new `atlas_artifact` Data Source
+ * `atlas_artifact` resource has been deprecated. Please use the new `atlas_artifact` Data Source
  * The `member` attribute of `openstack_lb_pool_v1` has been deprecated. Please use the new `openstack_lb_member_v1` resource.
+ * All deprecated parameters are removed from all `CloudStack` resources
 
 FEATURES:
 
@@ -38,6 +39,7 @@ FEATURES:
  * **New Data Source:** `aws_availability_zones` [GH-6805]
  * **New Data Source:** `aws_iam_policy_document` [GH-6881]
  * **New Data Source:** `aws_s3_bucket_object` [GH-6946]
+ * **New Data Source:** `aws_ecs_container_definition` [GH-7230]
  * **New Data Source:** `atlas_artifact` [GH-7419]
  * **New Interpolation Function:** `sort` [GH-7128]
  * **New Interpolation Function:** `distinct` [GH-7174]
@@ -62,6 +64,7 @@ FEATURES:
  * **New Resource:** `aws_simpledb_domain` [GH-7600]
  * **New Resource:** `aws_opsworks_user_profile` [GH-6304]
  * **New Resource:** `aws_opsworks_permission` [GH-6304]
+ * **New Resource:** `aws_ami_launch_permission` [GH-7365]
  * **New Resource:** `openstack_blockstorage_volume_v2` [GH-6693]
  * **New Resource:** `openstack_lb_loadbalancer_v2` [GH-7012]
  * **New Resource:** `openstack_lb_listener_v2` [GH-7012]
@@ -85,6 +88,7 @@ IMPROVEMENTS:
  * core: Support `.` in map keys [GH-7654]
  * command: Remove second DefaultDataDirectory const [GH-7666]
  * provider/aws: Add `dns_name` to `aws_efs_mount_target` [GH-7428]
+ * provider/aws: Add `force_destroy` to `aws_iam_user` for force-deleting access keys assigned to the user [GH-7766]
  * provider/aws: Add `option_settings` to `aws_db_option_group` [GH-6560]
  * provider/aws: Add more explicit support for Skipping Final Snapshot in RDS Cluster [GH-6795]
  * provider/aws: Add support for S3 Bucket Acceleration [GH-6628]
@@ -118,6 +122,8 @@ IMPROVEMENTS:
  * provider/aws: Support `task_role_arn` on `aws_ecs_task_definition` [GH-7653]
  * provider/aws: Support Tags on `aws_rds_cluster` [GH-7695]
  * provider/aws: Support kms_key_id for `aws_rds_cluster` [GH-7662]
+ * provider/aws: Allow setting a `poll_interval` on `aws_elastic_beanstalk_environment` [GH-7523]
+ * provider/aws: Add support for Kinesis streams shard-level metrics [GH-7684]
  * provider/azurerm: Add support for EnableIPForwarding to `azurerm_network_interface` [GH-6807]
  * provider/azurerm: Add support for exporting the `azurerm_storage_account` access keys [GH-6742]
  * provider/azurerm: The Azure SDK now exposes better error messages [GH-6976]
@@ -128,6 +134,11 @@ IMPROVEMENTS:
  * provider/cloudstack: Add support for affinity groups to `cloudstack_instance` [GH-6898]
  * provider/cloudstack: Enable swapping of ACLs without having to rebuild the network tier [GH-6741]
  * provider/cloudstack: Improve ACL swapping [GH-7315]
+ * provider/cloudstack: Add project support to `cloudstack_network_acl` and `cloudstack_network_acl_rule` [GH-7612]
+ * provider/cloudstack: Add option to set `root_disk_size` to `cloudstack_instance` [GH-7070]
+ * provider/cloudstack: No longer force a new `cloudstack_instance` resource when updating `user_data` [GH-7074]
+ * provider/cloudstack: Add option to set `security_group_names` to `cloudstack_instance` [GH-7240]
+ * provider/cloudstack: Add option to set `affinity_group_names` to `cloudstack_instance` [GH-7242]
  * provider/datadog: Add support for 'require full window' and 'locked' [GH-6738]
  * provider/docker: Docker Container DNS Setting Enhancements [GH-7392]
  * provider/docker: Add `destroy_grace_seconds` option to stop container before delete [GH-7513]
@@ -225,6 +236,9 @@ BUG FIXES:
  * provider/azurerm: `azurerm_virtual_machine` computer_name now Required [GH-7308]
  * provider/cloudflare: Fix issue upgrading CloudFlare Records created before v0.6.15 [GH-6969]
  * provider/cloudstack: Fix using `cloudstack_network_acl` within a project [GH-6743]
+ * provider/cloudstack: Fix refreshing `cloudstack_network_acl_rule` when the associated ACL is deleted [GH-7612]
+ * provider/cloudstack: Fix refreshing `cloudstack_port_forward` when the associated IP address is no longer associated [GH-7612]
+ * provider/cloudstack: Fix creating `cloudstack_network` with offerings that do not support specifying IP ranges [GH-7612]
  * provider/digitalocean: Stop `digitalocean_droplet` forcing new resource on uppercase region [GH-7044]
  * provider/digitalocean: Reassign Floating IP when droplet changes [GH-7411]
  * provider/google: Fix a bug causing an error attempting to delete an already-deleted `google_compute_disk` [GH-6689]
diff --git a/builtin/providers/atlas/provider.go b/builtin/providers/atlas/provider.go
index 3d14edca8b58..14928de635cc 100644
--- a/builtin/providers/atlas/provider.go
+++ b/builtin/providers/atlas/provider.go
@@ -52,6 +52,7 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
 			return nil, err
 		}
 	}
+	client.DefaultHeader.Set(terraform.VersionHeader, terraform.VersionString())
 
 	client.Token = d.Get("token").(string)
 
 	return client, nil
diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go
index a9b49528c028..88bf4d0e70e3 100644
--- a/builtin/providers/aws/config.go
+++ b/builtin/providers/aws/config.go
@@ -243,6 +243,7 @@ func (c *Config) Client() (interface{}, error) {
 		client.r53conn = route53.New(usEast1Sess)
 		client.rdsconn = rds.New(sess)
 		client.redshiftconn = redshift.New(sess)
+		client.simpledbconn = simpledb.New(sess)
 		client.s3conn = s3.New(sess)
 		client.sesConn = ses.New(sess)
 		client.snsconn = sns.New(sess)
@@ -323,7 +324,7 @@ func (c *Config) ValidateAccountId(accountId string) error {
 var addTerraformVersionToUserAgent = request.NamedHandler{
 	Name: "terraform.TerraformVersionUserAgentHandler",
 	Fn: request.MakeAddToUserAgentHandler(
-		"terraform", terraform.Version, terraform.VersionPrerelease),
+		"terraform", terraform.VersionString()),
 }
 
 type awsLogger struct{}
diff --git a/builtin/providers/aws/data_source_aws_ecs_container_definition.go b/builtin/providers/aws/data_source_aws_ecs_container_definition.go
new file mode 100644
index 000000000000..ecc1b20b7c92
--- /dev/null
+++ b/builtin/providers/aws/data_source_aws_ecs_container_definition.go
@@ -0,0 +1,98 @@
+package aws
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/ecs"
+	"github.com/hashicorp/terraform/helper/schema"
+)
+
+func dataSourceAwsEcsContainerDefinition() *schema.Resource {
+	return &schema.Resource{
+		Read: dataSourceAwsEcsContainerDefinitionRead,
+
+		Schema: map[string]*schema.Schema{
+			"task_definition": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+				ForceNew: true,
+			},
+			"container_name": &schema.Schema{
+				Type:
schema.TypeString, + Required: true, + ForceNew: true, + }, + // Computed values. + "image": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "image_digest": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "cpu": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + }, + "memory": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + }, + "disable_networking": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + "docker_labels": &schema.Schema{ + Type: schema.TypeMap, + Computed: true, + Elem: schema.TypeString, + }, + "environment": &schema.Schema{ + Type: schema.TypeMap, + Computed: true, + Elem: schema.TypeString, + }, + }, + } +} + +func dataSourceAwsEcsContainerDefinitionRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecsconn + + desc, err := conn.DescribeTaskDefinition(&ecs.DescribeTaskDefinitionInput{ + TaskDefinition: aws.String(d.Get("task_definition").(string)), + }) + if err != nil { + return err + } + + taskDefinition := *desc.TaskDefinition + for _, def := range taskDefinition.ContainerDefinitions { + if aws.StringValue(def.Name) != d.Get("container_name").(string) { + continue + } + + d.SetId(fmt.Sprintf("%s/%s", aws.StringValue(taskDefinition.TaskDefinitionArn), d.Get("container_name").(string))) + d.Set("image", aws.StringValue(def.Image)) + d.Set("image_digest", strings.Split(aws.StringValue(def.Image), ":")[1]) + d.Set("cpu", aws.Int64Value(def.Cpu)) + d.Set("memory", aws.Int64Value(def.Memory)) + d.Set("disable_networking", aws.BoolValue(def.DisableNetworking)) + d.Set("docker_labels", aws.StringValueMap(def.DockerLabels)) + + var environment = map[string]string{} + for _, keyValuePair := range def.Environment { + environment[aws.StringValue(keyValuePair.Name)] = aws.StringValue(keyValuePair.Value) + } + d.Set("environment", environment) + } + + if d.Id() == "" { + return fmt.Errorf("container with name %q not found in task definition %q", d.Get("container_name").(string), d.Get("task_definition").(string)) + } + + return nil +} diff --git a/builtin/providers/aws/data_source_aws_ecs_container_definition_test.go b/builtin/providers/aws/data_source_aws_ecs_container_definition_test.go new file mode 100644 index 000000000000..4618085506ac --- /dev/null +++ b/builtin/providers/aws/data_source_aws_ecs_container_definition_test.go @@ -0,0 +1,62 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSEcsDataSource_ecsContainerDefinition(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckAwsEcsContainerDefinitionDataSourceConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_ecs_container_definition.mongo", "image", "mongo:latest"), + resource.TestCheckResourceAttr("data.aws_ecs_container_definition.mongo", "memory", "128"), + resource.TestCheckResourceAttr("data.aws_ecs_container_definition.mongo", "cpu", "128"), + resource.TestCheckResourceAttr("data.aws_ecs_container_definition.mongo", "environment.SECRET", "KEY"), + ), + }, + }, + }) +} + +const testAccCheckAwsEcsContainerDefinitionDataSourceConfig = ` +resource "aws_ecs_cluster" "default" { + name = "terraformecstest1" +} + +resource "aws_ecs_task_definition" "mongo" { + family = "mongodb" + container_definitions = < 60*time.Second { + errors = append(errors, fmt.Errorf( + "%q must be 
between 10s and 180s", k)) + } + return + }, + }, "autoscaling_groups": &schema.Schema{ Type: schema.TypeList, Computed: true, @@ -183,10 +200,6 @@ func resourceAwsElasticBeanstalkEnvironmentCreate(d *schema.ResourceData, meta i settings := d.Get("setting").(*schema.Set) solutionStack := d.Get("solution_stack_name").(string) templateName := d.Get("template_name").(string) - waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string)) - if err != nil { - return err - } // TODO set tags // Note: at time of writing, you cannot view or edit Tags after creation @@ -244,13 +257,25 @@ func resourceAwsElasticBeanstalkEnvironmentCreate(d *schema.ResourceData, meta i // Assign the application name as the resource ID d.SetId(*resp.EnvironmentId) + waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string)) + if err != nil { + return err + } + + pollInterval, err := time.ParseDuration(d.Get("poll_interval").(string)) + if err != nil { + pollInterval = 0 + log.Printf("[WARN] Error parsing poll_interval, using default backoff") + } + stateConf := &resource.StateChangeConf{ - Pending: []string{"Launching", "Updating"}, - Target: []string{"Ready"}, - Refresh: environmentStateRefreshFunc(conn, d.Id()), - Timeout: waitForReadyTimeOut, - Delay: 10 * time.Second, - MinTimeout: 3 * time.Second, + Pending: []string{"Launching", "Updating"}, + Target: []string{"Ready"}, + Refresh: environmentStateRefreshFunc(conn, d.Id()), + Timeout: waitForReadyTimeOut, + Delay: 10 * time.Second, + PollInterval: pollInterval, + MinTimeout: 3 * time.Second, } _, err = stateConf.WaitForState() @@ -272,24 +297,25 @@ func resourceAwsElasticBeanstalkEnvironmentUpdate(d *schema.ResourceData, meta i conn := meta.(*AWSClient).elasticbeanstalkconn envId := d.Id() - waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string)) - if err != nil { - return err - } + + var hasChange bool updateOpts := elasticbeanstalk.UpdateEnvironmentInput{ EnvironmentId: aws.String(envId), } if d.HasChange("description") { + hasChange = true updateOpts.Description = aws.String(d.Get("description").(string)) } if d.HasChange("solution_stack_name") { + hasChange = true updateOpts.SolutionStackName = aws.String(d.Get("solution_stack_name").(string)) } if d.HasChange("setting") { + hasChange = true o, n := d.GetChange("setting") if o == nil { o = &schema.Set{F: optionSettingValueHash} @@ -305,36 +331,50 @@ func resourceAwsElasticBeanstalkEnvironmentUpdate(d *schema.ResourceData, meta i } if d.HasChange("template_name") { + hasChange = true updateOpts.TemplateName = aws.String(d.Get("template_name").(string)) } - // Get the current time to filter describeBeanstalkEvents messages - t := time.Now() - log.Printf("[DEBUG] Elastic Beanstalk Environment update opts: %s", updateOpts) - _, err = conn.UpdateEnvironment(&updateOpts) - if err != nil { - return err - } + if hasChange { + // Get the current time to filter describeBeanstalkEvents messages + t := time.Now() + log.Printf("[DEBUG] Elastic Beanstalk Environment update opts: %s", updateOpts) + _, err := conn.UpdateEnvironment(&updateOpts) + if err != nil { + return err + } - stateConf := &resource.StateChangeConf{ - Pending: []string{"Launching", "Updating"}, - Target: []string{"Ready"}, - Refresh: environmentStateRefreshFunc(conn, d.Id()), - Timeout: waitForReadyTimeOut, - Delay: 10 * time.Second, - MinTimeout: 3 * time.Second, - } + waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string)) + if err 
!= nil { + return err + } + pollInterval, err := time.ParseDuration(d.Get("poll_interval").(string)) + if err != nil { + pollInterval = 0 + log.Printf("[WARN] Error parsing poll_interval, using default backoff") + } - _, err = stateConf.WaitForState() - if err != nil { - return fmt.Errorf( - "Error waiting for Elastic Beanstalk Environment (%s) to become ready: %s", - d.Id(), err) - } + stateConf := &resource.StateChangeConf{ + Pending: []string{"Launching", "Updating"}, + Target: []string{"Ready"}, + Refresh: environmentStateRefreshFunc(conn, d.Id()), + Timeout: waitForReadyTimeOut, + Delay: 10 * time.Second, + PollInterval: pollInterval, + MinTimeout: 3 * time.Second, + } - err = describeBeanstalkEvents(conn, d.Id(), t) - if err != nil { - return err + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for Elastic Beanstalk Environment (%s) to become ready: %s", + d.Id(), err) + } + + err = describeBeanstalkEvents(conn, d.Id(), t) + if err != nil { + return err + } } return resourceAwsElasticBeanstalkEnvironmentRead(d, meta) @@ -547,11 +587,6 @@ func resourceAwsElasticBeanstalkEnvironmentSettingsRead(d *schema.ResourceData, func resourceAwsElasticBeanstalkEnvironmentDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticbeanstalkconn - waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string)) - if err != nil { - return err - } - opts := elasticbeanstalk.TerminateEnvironmentInput{ EnvironmentId: aws.String(d.Id()), TerminateResources: aws.Bool(true), @@ -560,19 +595,30 @@ func resourceAwsElasticBeanstalkEnvironmentDelete(d *schema.ResourceData, meta i // Get the current time to filter describeBeanstalkEvents messages t := time.Now() log.Printf("[DEBUG] Elastic Beanstalk Environment terminate opts: %s", opts) - _, err = conn.TerminateEnvironment(&opts) + _, err := conn.TerminateEnvironment(&opts) if err != nil { return err } + waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string)) + if err != nil { + return err + } + pollInterval, err := time.ParseDuration(d.Get("poll_interval").(string)) + if err != nil { + pollInterval = 0 + log.Printf("[WARN] Error parsing poll_interval, using default backoff") + } + stateConf := &resource.StateChangeConf{ - Pending: []string{"Terminating"}, - Target: []string{"Terminated"}, - Refresh: environmentStateRefreshFunc(conn, d.Id()), - Timeout: waitForReadyTimeOut, - Delay: 10 * time.Second, - MinTimeout: 3 * time.Second, + Pending: []string{"Terminating"}, + Target: []string{"Terminated"}, + Refresh: environmentStateRefreshFunc(conn, d.Id()), + Timeout: waitForReadyTimeOut, + Delay: 10 * time.Second, + PollInterval: pollInterval, + MinTimeout: 3 * time.Second, } _, err = stateConf.WaitForState() diff --git a/builtin/providers/aws/resource_aws_iam_user.go b/builtin/providers/aws/resource_aws_iam_user.go index 30282031db4e..7fd3509efb2e 100644 --- a/builtin/providers/aws/resource_aws_iam_user.go +++ b/builtin/providers/aws/resource_aws_iam_user.go @@ -48,6 +48,12 @@ func resourceAwsIamUser() *schema.Resource { Default: "/", ForceNew: true, }, + "force_destroy": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: "Delete user even if it has non-Terraform-managed IAM access keys", + }, }, } } @@ -132,39 +138,25 @@ func resourceAwsIamUserUpdate(d *schema.ResourceData, meta interface{}) error { } return nil } + func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error 
{ iamconn := meta.(*AWSClient).iamconn // IAM Users must be removed from all groups before they can be deleted var groups []string - var marker *string - truncated := aws.Bool(true) - - for *truncated == true { - listOpts := iam.ListGroupsForUserInput{ - UserName: aws.String(d.Id()), - } - - if marker != nil { - listOpts.Marker = marker - } - - r, err := iamconn.ListGroupsForUser(&listOpts) - if err != nil { - return err - } - - for _, g := range r.Groups { + listGroups := &iam.ListGroupsForUserInput{ + UserName: aws.String(d.Id()), + } + pageOfGroups := func(page *iam.ListGroupsForUserOutput, lastPage bool) (shouldContinue bool) { + for _, g := range page.Groups { groups = append(groups, *g.GroupName) } - - // if there's a marker present, we need to save it for pagination - if r.Marker != nil { - *marker = *r.Marker - } - *truncated = *r.IsTruncated + return !lastPage + } + err := iamconn.ListGroupsForUserPages(listGroups, pageOfGroups) + if err != nil { + return fmt.Errorf("Error removing user %q from all groups: %s", d.Id(), err) } - for _, g := range groups { // use iam group membership func to remove user from all groups log.Printf("[DEBUG] Removing IAM User %s from IAM Group %s", d.Id(), g) @@ -173,6 +165,33 @@ func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error { } } + // All access keys for the user must be removed + if d.Get("force_destroy").(bool) { + var accessKeys []string + listAccessKeys := &iam.ListAccessKeysInput{ + UserName: aws.String(d.Id()), + } + pageOfAccessKeys := func(page *iam.ListAccessKeysOutput, lastPage bool) (shouldContinue bool) { + for _, k := range page.AccessKeyMetadata { + accessKeys = append(accessKeys, *k.AccessKeyId) + } + return !lastPage + } + err = iamconn.ListAccessKeysPages(listAccessKeys, pageOfAccessKeys) + if err != nil { + return fmt.Errorf("Error removing access keys of user %s: %s", d.Id(), err) + } + for _, k := range accessKeys { + _, err := iamconn.DeleteAccessKey(&iam.DeleteAccessKeyInput{ + UserName: aws.String(d.Id()), + AccessKeyId: aws.String(k), + }) + if err != nil { + return fmt.Errorf("Error deleting access key %s: %s", k, err) + } + } + } + request := &iam.DeleteUserInput{ UserName: aws.String(d.Id()), } diff --git a/builtin/providers/aws/resource_aws_kinesis_stream.go b/builtin/providers/aws/resource_aws_kinesis_stream.go index 1d70d847640a..ed731f1657d1 100644 --- a/builtin/providers/aws/resource_aws_kinesis_stream.go +++ b/builtin/providers/aws/resource_aws_kinesis_stream.go @@ -46,6 +46,13 @@ func resourceAwsKinesisStream() *schema.Resource { }, }, + "shard_level_metrics": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "arn": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -110,6 +117,9 @@ func resourceAwsKinesisStreamUpdate(d *schema.ResourceData, meta interface{}) er if err := setKinesisRetentionPeriod(conn, d); err != nil { return err } + if err := updateKinesisShardLevelMetrics(conn, d); err != nil { + return err + } return resourceAwsKinesisStreamRead(d, meta) } @@ -134,6 +144,10 @@ func resourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) erro d.Set("shard_count", state.shardCount) d.Set("retention_period", state.retentionPeriod) + if len(state.shardLevelMetrics) > 0 { + d.Set("shard_level_metrics", state.shardLevelMetrics) + } + // set tags describeTagsOpts := &kinesis.ListTagsForStreamInput{ StreamName: aws.String(sn), @@ -212,30 +226,74 @@ func 
setKinesisRetentionPeriod(conn *kinesis.Kinesis, d *schema.ResourceData) er } } - stateConf := &resource.StateChangeConf{ - Pending: []string{"UPDATING"}, - Target: []string{"ACTIVE"}, - Refresh: streamStateRefreshFunc(conn, sn), - Timeout: 5 * time.Minute, - Delay: 10 * time.Second, - MinTimeout: 3 * time.Second, + if err := waitForKinesisToBeActive(conn, sn); err != nil { + return err } - _, err := stateConf.WaitForState() - if err != nil { - return fmt.Errorf( - "Error waiting for Kinesis Stream (%s) to become active: %s", - sn, err) + return nil +} + +func updateKinesisShardLevelMetrics(conn *kinesis.Kinesis, d *schema.ResourceData) error { + sn := d.Get("name").(string) + + o, n := d.GetChange("shard_level_metrics") + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) + } + + os := o.(*schema.Set) + ns := n.(*schema.Set) + + disableMetrics := os.Difference(ns) + if disableMetrics.Len() != 0 { + metrics := disableMetrics.List() + log.Printf("[DEBUG] Disabling shard level metrics %v for stream %s", metrics, sn) + + props := &kinesis.DisableEnhancedMonitoringInput{ + StreamName: aws.String(sn), + ShardLevelMetrics: expandStringList(metrics), + } + + _, err := conn.DisableEnhancedMonitoring(props) + if err != nil { + return fmt.Errorf("Failure to disable shard level metrics for stream %s: %s", sn, err) + } + if err := waitForKinesisToBeActive(conn, sn); err != nil { + return err + } + } + + enabledMetrics := ns.Difference(os) + if enabledMetrics.Len() != 0 { + metrics := enabledMetrics.List() + log.Printf("[DEBUG] Enabling shard level metrics %v for stream %s", metrics, sn) + + props := &kinesis.EnableEnhancedMonitoringInput{ + StreamName: aws.String(sn), + ShardLevelMetrics: expandStringList(metrics), + } + + _, err := conn.EnableEnhancedMonitoring(props) + if err != nil { + return fmt.Errorf("Failure to enable shard level metrics for stream %s: %s", sn, err) + } + if err := waitForKinesisToBeActive(conn, sn); err != nil { + return err + } } return nil } type kinesisStreamState struct { - arn string - status string - shardCount int - retentionPeriod int64 + arn string + status string + shardCount int + retentionPeriod int64 + shardLevelMetrics []string } func readKinesisStreamState(conn *kinesis.Kinesis, sn string) (kinesisStreamState, error) { @@ -249,6 +307,7 @@ func readKinesisStreamState(conn *kinesis.Kinesis, sn string) (kinesisStreamStat state.status = aws.StringValue(page.StreamDescription.StreamStatus) state.shardCount += len(openShards(page.StreamDescription.Shards)) state.retentionPeriod = aws.Int64Value(page.StreamDescription.RetentionPeriodHours) + state.shardLevelMetrics = flattenKinesisShardLevelMetrics(page.StreamDescription.EnhancedMonitoring) return !last }) return state, err @@ -271,6 +330,25 @@ func streamStateRefreshFunc(conn *kinesis.Kinesis, sn string) resource.StateRefr } } +func waitForKinesisToBeActive(conn *kinesis.Kinesis, sn string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{"UPDATING"}, + Target: []string{"ACTIVE"}, + Refresh: streamStateRefreshFunc(conn, sn), + Timeout: 5 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err := stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for Kinesis Stream (%s) to become active: %s", + sn, err) + } + return nil +} + // See http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-using-sdk-java-resharding-merge.html func openShards(shards []*kinesis.Shard) []*kinesis.Shard { var open []*kinesis.Shard diff 
--git a/builtin/providers/aws/resource_aws_kinesis_stream_test.go b/builtin/providers/aws/resource_aws_kinesis_stream_test.go index 626f949f681e..974761182984 100644 --- a/builtin/providers/aws/resource_aws_kinesis_stream_test.go +++ b/builtin/providers/aws/resource_aws_kinesis_stream_test.go @@ -116,6 +116,52 @@ func TestAccAWSKinesisStream_retentionPeriod(t *testing.T) { }) } +func TestAccAWSKinesisStream_shardLevelMetrics(t *testing.T) { + var stream kinesis.StreamDescription + + ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + config := fmt.Sprintf(testAccKinesisStreamConfig, ri) + allConfig := fmt.Sprintf(testAccKinesisStreamConfigAllShardLevelMetrics, ri) + singleConfig := fmt.Sprintf(testAccKinesisStreamConfigSingleShardLevelMetric, ri) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisStreamDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: config, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream), + testAccCheckAWSKinesisStreamAttributes(&stream), + resource.TestCheckResourceAttr( + "aws_kinesis_stream.test_stream", "shard_level_metrics.#", ""), + ), + }, + + resource.TestStep{ + Config: allConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream), + testAccCheckAWSKinesisStreamAttributes(&stream), + resource.TestCheckResourceAttr( + "aws_kinesis_stream.test_stream", "shard_level_metrics.#", "7"), + ), + }, + + resource.TestStep{ + Config: singleConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream), + testAccCheckAWSKinesisStreamAttributes(&stream), + resource.TestCheckResourceAttr( + "aws_kinesis_stream.test_stream", "shard_level_metrics.#", "1"), + ), + }, + }, + }) +} + func testAccCheckKinesisStreamExists(n string, stream *kinesis.StreamDescription) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -227,3 +273,35 @@ resource "aws_kinesis_stream" "test_stream" { } } ` + +var testAccKinesisStreamConfigAllShardLevelMetrics = ` +resource "aws_kinesis_stream" "test_stream" { + name = "terraform-kinesis-test-%d" + shard_count = 2 + tags { + Name = "tf-test" + } + shard_level_metrics = [ + "IncomingBytes", + "IncomingRecords", + "OutgoingBytes", + "OutgoingRecords", + "WriteProvisionedThroughputExceeded", + "ReadProvisionedThroughputExceeded", + "IteratorAgeMilliseconds" + ] +} +` + +var testAccKinesisStreamConfigSingleShardLevelMetric = ` +resource "aws_kinesis_stream" "test_stream" { + name = "terraform-kinesis-test-%d" + shard_count = 2 + tags { + Name = "tf-test" + } + shard_level_metrics = [ + "IncomingBytes" + ] +} +` diff --git a/builtin/providers/aws/resource_aws_rds_cluster_instance.go b/builtin/providers/aws/resource_aws_rds_cluster_instance.go index 7956656850ec..745674c43281 100644 --- a/builtin/providers/aws/resource_aws_rds_cluster_instance.go +++ b/builtin/providers/aws/resource_aws_rds_cluster_instance.go @@ -205,6 +205,7 @@ func resourceAwsRDSClusterInstanceRead(d *schema.ResourceData, meta interface{}) d.Set("cluster_identifier", db.DBClusterIdentifier) d.Set("instance_class", db.DBInstanceClass) d.Set("identifier", db.DBInstanceIdentifier) + d.Set("storage_encrypted", db.StorageEncrypted) if len(db.DBParameterGroups) > 0 { d.Set("db_parameter_group_name", 
db.DBParameterGroups[0].DBParameterGroupName) diff --git a/builtin/providers/aws/resource_aws_redshift_cluster.go b/builtin/providers/aws/resource_aws_redshift_cluster.go index 68651c28b432..6ca041389442 100644 --- a/builtin/providers/aws/resource_aws_redshift_cluster.go +++ b/builtin/providers/aws/resource_aws_redshift_cluster.go @@ -373,7 +373,7 @@ func resourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) er } else { d.Set("cluster_type", "single-node") } - d.Set("number_of_nodes", len(rsc.ClusterNodes)) + d.Set("number_of_nodes", rsc.NumberOfNodes) d.Set("publicly_accessible", rsc.PubliclyAccessible) var vpcg []string diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go index 41464a5d3762..cdedb27ca062 100644 --- a/builtin/providers/aws/structure.go +++ b/builtin/providers/aws/structure.go @@ -21,6 +21,7 @@ import ( "github.com/aws/aws-sdk-go/service/elasticbeanstalk" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/elb" + "github.com/aws/aws-sdk-go/service/kinesis" "github.com/aws/aws-sdk-go/service/lambda" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/redshift" @@ -1005,6 +1006,17 @@ func flattenAsgEnabledMetrics(list []*autoscaling.EnabledMetric) []string { return strs } +func flattenKinesisShardLevelMetrics(list []*kinesis.EnhancedMetrics) []string { + if len(list) == 0 { + return []string{} + } + strs := make([]string, 0, len(list[0].ShardLevelMetrics)) + for _, s := range list[0].ShardLevelMetrics { + strs = append(strs, *s) + } + return strs +} + func flattenApiGatewayStageKeys(keys []*string) []map[string]interface{} { stageKeys := make([]map[string]interface{}, 0, len(keys)) for _, o := range keys { diff --git a/builtin/providers/aws/structure_test.go b/builtin/providers/aws/structure_test.go index 0ac0a73dcfae..937411af1651 100644 --- a/builtin/providers/aws/structure_test.go +++ b/builtin/providers/aws/structure_test.go @@ -11,6 +11,7 @@ import ( "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/aws/aws-sdk-go/service/elb" + "github.com/aws/aws-sdk-go/service/kinesis" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/redshift" "github.com/aws/aws-sdk-go/service/route53" @@ -839,6 +840,27 @@ func TestFlattenAsgEnabledMetrics(t *testing.T) { } } +func TestFlattenKinesisShardLevelMetrics(t *testing.T) { + expanded := []*kinesis.EnhancedMetrics{ + &kinesis.EnhancedMetrics{ + ShardLevelMetrics: []*string{ + aws.String("IncomingBytes"), + aws.String("IncomingRecords"), + }, + }, + } + result := flattenKinesisShardLevelMetrics(expanded) + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + if result[0] != "IncomingBytes" { + t.Fatalf("expected element 0 to be IncomingBytes, but was %s", result[0]) + } + if result[1] != "IncomingRecords" { + t.Fatalf("expected element 0 to be IncomingRecords, but was %s", result[1]) + } +} + func TestFlattenSecurityGroups(t *testing.T) { cases := []struct { ownerId *string diff --git a/builtin/providers/azurerm/config.go b/builtin/providers/azurerm/config.go index 50e9c89f409a..e44a9222ab9c 100644 --- a/builtin/providers/azurerm/config.go +++ b/builtin/providers/azurerm/config.go @@ -92,13 +92,7 @@ func withRequestLogging() autorest.SendDecorator { } func setUserAgent(client *autorest.Client) { - var version string - if terraform.VersionPrerelease != "" { - version = fmt.Sprintf("%s-%s", 
terraform.Version, terraform.VersionPrerelease) - } else { - version = terraform.Version - } - + version := terraform.VersionString() client.UserAgent = fmt.Sprintf("HashiCorp-Terraform-v%s", version) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_network.go b/builtin/providers/cloudstack/resource_cloudstack_network.go index 1fccdfd558ee..2b58b5befe25 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network.go @@ -25,6 +25,8 @@ func resourceCloudStackNetwork() *schema.Resource { if value == none { aclidSchema.ForceNew = true + } else { + aclidSchema.ForceNew = false } return value diff --git a/builtin/providers/fastly/resource_fastly_service_v1.go b/builtin/providers/fastly/resource_fastly_service_v1.go index e61802c55d19..348170a0e1d8 100644 --- a/builtin/providers/fastly/resource_fastly_service_v1.go +++ b/builtin/providers/fastly/resource_fastly_service_v1.go @@ -1566,7 +1566,7 @@ func buildHeader(headerMap interface{}) (*gofastly.CreateHeaderInput, error) { df := headerMap.(map[string]interface{}) opts := gofastly.CreateHeaderInput{ Name: df["name"].(string), - IgnoreIfSet: df["ignore_if_set"].(bool), + IgnoreIfSet: gofastly.Compatibool(df["ignore_if_set"].(bool)), Destination: df["destination"].(string), Priority: uint(df["priority"].(int)), Source: df["source"].(string), diff --git a/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go b/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go index 306de61f458f..7c59470e859d 100644 --- a/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go +++ b/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go @@ -181,10 +181,11 @@ resource "fastly_service_v1" "foo" { } header { - destination = "http.Server" - type = "cache" - action = "delete" - name = "remove s3 server" + destination = "http.Server" + type = "cache" + action = "delete" + name = "remove s3 server" + ignore_if_set = "true" } force_destroy = true diff --git a/builtin/providers/google/config.go b/builtin/providers/google/config.go index 159a57e09326..c824c9ee6b4f 100644 --- a/builtin/providers/google/config.go +++ b/builtin/providers/google/config.go @@ -85,11 +85,7 @@ func (c *Config) loadAndValidate() error { } } - versionString := terraform.Version - prerelease := terraform.VersionPrerelease - if len(prerelease) > 0 { - versionString = fmt.Sprintf("%s-%s", versionString, prerelease) - } + versionString := terraform.VersionString() userAgent := fmt.Sprintf( "(%s %s) Terraform/%s", runtime.GOOS, runtime.GOARCH, versionString) diff --git a/builtin/provisioners/chef/linux_provisioner_test.go b/builtin/provisioners/chef/linux_provisioner_test.go index ec72f7debf67..c338405839dc 100644 --- a/builtin/provisioners/chef/linux_provisioner_test.go +++ b/builtin/provisioners/chef/linux_provisioner_test.go @@ -220,6 +220,7 @@ func TestResourceProvider_linuxCreateConfigFiles(t *testing.T) { "run_list": []interface{}{"cookbook::recipe"}, "secret_key_path": "test-fixtures/encrypted_data_bag_secret", "server_url": "https://chef.local", + "ssl_verify_mode": "verify_none", "validation_client_name": "validator", "validation_key_path": "test-fixtures/validator.pem", }), @@ -340,20 +341,15 @@ chef_server_url "https://chef.local" validation_client_name "validator" node_name "nodename1" - - - http_proxy "http://proxy.local" ENV['http_proxy'] = "http://proxy.local" ENV['HTTP_PROXY'] = "http://proxy.local" - - https_proxy "https://proxy.local" 
ENV['https_proxy'] = "https://proxy.local" ENV['HTTPS_PROXY'] = "https://proxy.local" - - no_proxy "http://local.local,https://local.local" -ENV['no_proxy'] = "http://local.local,https://local.local"` +ENV['no_proxy'] = "http://local.local,https://local.local" + +ssl_verify_mode :verify_none` diff --git a/builtin/provisioners/chef/resource_provisioner.go b/builtin/provisioners/chef/resource_provisioner.go index d4c05752972e..276c4d3af3db 100644 --- a/builtin/provisioners/chef/resource_provisioner.go +++ b/builtin/provisioners/chef/resource_provisioner.go @@ -43,35 +43,40 @@ log_location STDOUT chef_server_url "{{ .ServerURL }}" validation_client_name "{{ .ValidationClientName }}" node_name "{{ .NodeName }}" - {{ if .UsePolicyfile }} use_policyfile true policy_group "{{ .PolicyGroup }}" policy_name "{{ .PolicyName }}" -{{ end }} +{{ end -}} {{ if .HTTPProxy }} http_proxy "{{ .HTTPProxy }}" ENV['http_proxy'] = "{{ .HTTPProxy }}" ENV['HTTP_PROXY'] = "{{ .HTTPProxy }}" -{{ end }} +{{ end -}} {{ if .HTTPSProxy }} https_proxy "{{ .HTTPSProxy }}" ENV['https_proxy'] = "{{ .HTTPSProxy }}" ENV['HTTPS_PROXY'] = "{{ .HTTPSProxy }}" -{{ end }} +{{ end -}} {{ if .NOProxy }} no_proxy "{{ join .NOProxy "," }}" ENV['no_proxy'] = "{{ join .NOProxy "," }}" -{{ end }} +{{ end -}} -{{ if .SSLVerifyMode }}ssl_verify_mode {{ .SSLVerifyMode }}{{ end }} +{{ if .SSLVerifyMode }} +ssl_verify_mode {{ .SSLVerifyMode }} +{{- end -}} -{{ if .DisableReporting }}enable_reporting false{{ end }} +{{ if .DisableReporting }} +enable_reporting false +{{ end -}} -{{ if .ClientOptions }}{{ join .ClientOptions "\n" }}{{ end }} +{{ if .ClientOptions }} +{{ join .ClientOptions "\n" }} +{{ end }} ` // Provisioner represents a specificly configured chef provisioner @@ -452,6 +457,11 @@ func (p *Provisioner) deployConfigFiles( } } + // Make sure the SSLVerifyMode value is written as a symbol + if p.SSLVerifyMode != "" && !strings.HasPrefix(p.SSLVerifyMode, ":") { + p.SSLVerifyMode = fmt.Sprintf(":%s", p.SSLVerifyMode) + } + // Make strings.Join available for use within the template funcMap := template.FuncMap{ "join": strings.Join, diff --git a/builtin/provisioners/chef/windows_provisioner_test.go b/builtin/provisioners/chef/windows_provisioner_test.go index 8dd0dee28edc..18a9b44d9221 100644 --- a/builtin/provisioners/chef/windows_provisioner_test.go +++ b/builtin/provisioners/chef/windows_provisioner_test.go @@ -137,6 +137,7 @@ func TestResourceProvider_windowsCreateConfigFiles(t *testing.T) { "run_list": []interface{}{"cookbook::recipe"}, "secret_key_path": "test-fixtures/encrypted_data_bag_secret", "server_url": "https://chef.local", + "ssl_verify_mode": "verify_none", "validation_client_name": "validator", "validation_key_path": "test-fixtures/validator.pem", }), @@ -366,20 +367,15 @@ chef_server_url "https://chef.local" validation_client_name "validator" node_name "nodename1" - - - http_proxy "http://proxy.local" ENV['http_proxy'] = "http://proxy.local" ENV['HTTP_PROXY'] = "http://proxy.local" - - https_proxy "https://proxy.local" ENV['https_proxy'] = "https://proxy.local" ENV['HTTPS_PROXY'] = "https://proxy.local" - - no_proxy "http://local.local,https://local.local" -ENV['no_proxy'] = "http://local.local,https://local.local"` +ENV['no_proxy'] = "http://local.local,https://local.local" + +ssl_verify_mode :verify_none` diff --git a/command/push.go b/command/push.go index 67cac67403f2..1abe13e0b56e 100644 --- a/command/push.go +++ b/command/push.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/atlas-go/archive" 
"github.com/hashicorp/atlas-go/v1" + "github.com/hashicorp/terraform/terraform" ) type PushCommand struct { @@ -126,6 +127,8 @@ func (c *PushCommand) Run(args []string) int { } } + client.DefaultHeader.Set(terraform.VersionHeader, terraform.Version) + if atlasToken != "" { client.Token = atlasToken } diff --git a/helper/resource/state.go b/helper/resource/state.go index 7f8680fed668..afa758699384 100644 --- a/helper/resource/state.go +++ b/helper/resource/state.go @@ -26,6 +26,7 @@ type StateChangeConf struct { Target []string // Target state Timeout time.Duration // The amount of time to wait before timeout MinTimeout time.Duration // Smallest time to wait before refreshes + PollInterval time.Duration // Override MinTimeout/backoff and only poll this often NotFoundChecks int // Number of times to allow not found // This is to work around inconsistent APIs @@ -72,14 +73,20 @@ func (conf *StateChangeConf) WaitForState() (interface{}, error) { time.Sleep(conf.Delay) var err error + var wait time.Duration for tries := 0; ; tries++ { // Wait between refreshes using an exponential backoff - wait := time.Duration(math.Pow(2, float64(tries))) * - 100 * time.Millisecond - if wait < conf.MinTimeout { - wait = conf.MinTimeout - } else if wait > 10*time.Second { - wait = 10 * time.Second + // If a poll interval has been specified, choose that interval + if conf.PollInterval > 0 && conf.PollInterval < 180*time.Second { + wait = conf.PollInterval + } else { + wait = time.Duration(math.Pow(2, float64(tries))) * + 100 * time.Millisecond + if wait < conf.MinTimeout { + wait = conf.MinTimeout + } else if wait > 10*time.Second { + wait = 10 * time.Second + } } log.Printf("[TRACE] Waiting %s before next try", wait) diff --git a/terraform/version.go b/terraform/version.go index e781d9c259aa..d753ff8e78e9 100644 --- a/terraform/version.go +++ b/terraform/version.go @@ -1,6 +1,8 @@ package terraform import ( + "fmt" + "github.com/hashicorp/go-version" ) @@ -16,3 +18,14 @@ const VersionPrerelease = "dev" // benefit of verifying during tests and init time that our version is a // proper semantic version, which should always be the case. var SemVersion = version.Must(version.NewVersion(Version)) + +// VersionHeader is the header name used to send the current terraform version +// in http requests. +const VersionHeader = "Terraform-Version" + +func VersionString() string { + if VersionPrerelease != "" { + return fmt.Sprintf("%s-%s", Version, VersionPrerelease) + } + return Version +} diff --git a/vendor/github.com/ajg/form/.travis.yml b/vendor/github.com/ajg/form/.travis.yml deleted file mode 100644 index b257361d8b2b..000000000000 --- a/vendor/github.com/ajg/form/.travis.yml +++ /dev/null @@ -1,24 +0,0 @@ -## Copyright 2014 Alvaro J. Genial. All rights reserved. -## Use of this source code is governed by a BSD-style -## license that can be found in the LICENSE file. - -language: go - -go: - - tip - - 1.3 - # - 1.2 - # Note: 1.2 is disabled because it seems to require that cover - # be installed from code.google.com/p/go.tools/cmd/cover - -before_install: - - go get -v golang.org/x/tools/cmd/cover - - go get -v golang.org/x/tools/cmd/vet - - go get -v github.com/golang/lint/golint - - export PATH=$PATH:/home/travis/gopath/bin - -script: - - go build -v ./... - - go test -v -cover ./... - - go vet ./... - - golint . 
diff --git a/vendor/github.com/ajg/form/README.md b/vendor/github.com/ajg/form/README.md index de3ab635c3b9..7117f48121be 100644 --- a/vendor/github.com/ajg/form/README.md +++ b/vendor/github.com/ajg/form/README.md @@ -171,10 +171,47 @@ Now any value with type `Binary` will automatically be encoded using the [URL](h Keys ---- -In theory any value can be a key as long as it has a string representation. However, periods have special meaning to `form`, and thus, under the hood (i.e. in encoded form) they are transparently escaped using a preceding backslash (`\`). Backslashes within keys, themselves, are also escaped in this manner (e.g. as `\\`) in order to permit representing `\.` itself (as `\\\.`). +In theory any value can be a key as long as it has a string representation. However, by default, periods have special meaning to `form`, and thus, under the hood (i.e. in encoded form) they are transparently escaped using a preceding backslash (`\`). Backslashes within keys, themselves, are also escaped in this manner (e.g. as `\\`) in order to permit representing `\.` itself (as `\\\.`). (Note: it is normally unnecessary to deal with this issue unless keys are being constructed manually—e.g. literally embedded in HTML or in a URI.) +The default delimiter and escape characters used for encoding and decoding composite keys can be changed using the `DelimitWith` and `EscapeWith` setter methods of `Encoder` and `Decoder`, respectively. For example... + +```go +package main + +import ( + "os" + + "github.com/ajg/form" +) + +func main() { + type B struct { + Qux string `form:"qux"` + } + type A struct { + FooBar B `form:"foo.bar"` + } + a := A{FooBar: B{"XYZ"}} + os.Stdout.WriteString("Default: ") + form.NewEncoder(os.Stdout).Encode(a) + os.Stdout.WriteString("\nCustom: ") + form.NewEncoder(os.Stdout).DelimitWith('/').Encode(a) + os.Stdout.WriteString("\n") +} + +``` + +...will produce... + +``` +Default: foo%5C.bar.qux=XYZ +Custom: foo.bar%2Fqux=XYZ +``` + +(`%5C` and `%2F` represent `\` and `/`, respectively.) + Limitations ----------- diff --git a/vendor/github.com/ajg/form/TODO.md b/vendor/github.com/ajg/form/TODO.md new file mode 100644 index 000000000000..672fd4657fa8 --- /dev/null +++ b/vendor/github.com/ajg/form/TODO.md @@ -0,0 +1,5 @@ +TODO +==== + + - Document IgnoreCase and IgnoreUnknownKeys in README. + - Fix want/have newlines in tests. diff --git a/vendor/github.com/ajg/form/decode.go b/vendor/github.com/ajg/form/decode.go index d03b2082c765..3346fffe5a78 100644 --- a/vendor/github.com/ajg/form/decode.go +++ b/vendor/github.com/ajg/form/decode.go @@ -16,12 +16,28 @@ import ( // NewDecoder returns a new form decoder. func NewDecoder(r io.Reader) *decoder { - return &decoder{r} + return &decoder{r, defaultDelimiter, defaultEscape, false, false} } // decoder decodes data from a form (application/x-www-form-urlencoded). type decoder struct { - r io.Reader + r io.Reader + d rune + e rune + ignoreUnknown bool + ignoreCase bool +} + +// DelimitWith sets r as the delimiter used for composite keys by decoder d and returns the latter; it is '.' by default. +func (d *decoder) DelimitWith(r rune) *decoder { + d.d = r + return d +} + +// EscapeWith sets r as the escape used for delimiters (and to escape itself) by decoder d and returns the latter; it is '\\' by default. +func (d *decoder) EscapeWith(r rune) *decoder { + d.e = r + return d } // Decode reads in and decodes form-encoded data into dst. 
@@ -35,26 +51,48 @@ func (d decoder) Decode(dst interface{}) error { return err } v := reflect.ValueOf(dst) - return decodeNode(v, parseValues(vs, canIndexOrdinally(v))) + return d.decodeNode(v, parseValues(d.d, d.e, vs, canIndexOrdinally(v))) +} + +// IgnoreUnknownKeys if set to true it will make the decoder ignore values +// that are not found in the destination object instead of returning an error. +func (d *decoder) IgnoreUnknownKeys(ignoreUnknown bool) { + d.ignoreUnknown = ignoreUnknown +} + +// IgnoreCase if set to true it will make the decoder try to set values in the +// destination object even if the case does not match. +func (d *decoder) IgnoreCase(ignoreCase bool) { + d.ignoreCase = ignoreCase } // DecodeString decodes src into dst. -func DecodeString(dst interface{}, src string) error { +func (d decoder) DecodeString(dst interface{}, src string) error { vs, err := url.ParseQuery(src) if err != nil { return err } v := reflect.ValueOf(dst) - return decodeNode(v, parseValues(vs, canIndexOrdinally(v))) + return d.decodeNode(v, parseValues(d.d, d.e, vs, canIndexOrdinally(v))) } // DecodeValues decodes vs into dst. -func DecodeValues(dst interface{}, vs url.Values) error { +func (d decoder) DecodeValues(dst interface{}, vs url.Values) error { v := reflect.ValueOf(dst) - return decodeNode(v, parseValues(vs, canIndexOrdinally(v))) + return d.decodeNode(v, parseValues(d.d, d.e, vs, canIndexOrdinally(v))) } -func decodeNode(v reflect.Value, n node) (err error) { +// DecodeString decodes src into dst. +func DecodeString(dst interface{}, src string) error { + return NewDecoder(nil).DecodeString(dst, src) +} + +// DecodeValues decodes vs into dst. +func DecodeValues(dst interface{}, vs url.Values) error { + return NewDecoder(nil).DecodeValues(dst, vs) +} + +func (d decoder) decodeNode(v reflect.Value, n node) (err error) { defer func() { if e := recover(); e != nil { err = fmt.Errorf("%v", e) @@ -64,11 +102,11 @@ func decodeNode(v reflect.Value, n node) (err error) { if v.Kind() == reflect.Slice { return fmt.Errorf("could not decode directly into slice; use pointer to slice") } - decodeValue(v, n) + d.decodeValue(v, n) return nil } -func decodeValue(v reflect.Value, x interface{}) { +func (d decoder) decodeValue(v reflect.Value, x interface{}) { t := v.Type() k := v.Kind() @@ -84,11 +122,11 @@ func decodeValue(v reflect.Value, x interface{}) { switch k { case reflect.Ptr: - decodeValue(v.Elem(), x) + d.decodeValue(v.Elem(), x) return case reflect.Interface: if !v.IsNil() { - decodeValue(v.Elem(), x) + d.decodeValue(v.Elem(), x) return } else if empty { @@ -106,48 +144,50 @@ func decodeValue(v reflect.Value, x interface{}) { switch k { case reflect.Struct: if t.ConvertibleTo(timeType) { - decodeTime(v, x) + d.decodeTime(v, x) } else if t.ConvertibleTo(urlType) { - decodeURL(v, x) + d.decodeURL(v, x) } else { - decodeStruct(v, x) + d.decodeStruct(v, x) } case reflect.Slice: - decodeSlice(v, x) + d.decodeSlice(v, x) case reflect.Array: - decodeArray(v, x) + d.decodeArray(v, x) case reflect.Map: - decodeMap(v, x) + d.decodeMap(v, x) case reflect.Invalid, reflect.Uintptr, reflect.UnsafePointer, reflect.Chan, reflect.Func: panic(t.String() + " has unsupported kind " + k.String()) default: - decodeBasic(v, x) + d.decodeBasic(v, x) } } -func decodeStruct(v reflect.Value, x interface{}) { +func (d decoder) decodeStruct(v reflect.Value, x interface{}) { t := v.Type() for k, c := range getNode(x) { - if f, ok := findField(v, k); !ok && k == "" { + if f, ok := findField(v, k, d.ignoreCase); !ok && k 
== "" { panic(getString(x) + " cannot be decoded as " + t.String()) } else if !ok { - panic(k + " doesn't exist in " + t.String()) + if !d.ignoreUnknown { + panic(k + " doesn't exist in " + t.String()) + } } else if !f.CanSet() { panic(k + " cannot be set in " + t.String()) } else { - decodeValue(f, c) + d.decodeValue(f, c) } } } -func decodeMap(v reflect.Value, x interface{}) { +func (d decoder) decodeMap(v reflect.Value, x interface{}) { t := v.Type() if v.IsNil() { v.Set(reflect.MakeMap(t)) } for k, c := range getNode(x) { i := reflect.New(t.Key()).Elem() - decodeValue(i, k) + d.decodeValue(i, k) w := v.MapIndex(i) if w.IsValid() { // We have an actual element value to decode into. @@ -171,12 +211,12 @@ func decodeMap(v reflect.Value, x interface{}) { } } - decodeValue(w, c) + d.decodeValue(w, c) v.SetMapIndex(i, w) } } -func decodeArray(v reflect.Value, x interface{}) { +func (d decoder) decodeArray(v reflect.Value, x interface{}) { t := v.Type() for k, c := range getNode(x) { i, err := strconv.Atoi(k) @@ -186,11 +226,11 @@ func decodeArray(v reflect.Value, x interface{}) { if l := v.Len(); i >= l { panic("index is above array size") } - decodeValue(v.Index(i), c) + d.decodeValue(v.Index(i), c) } } -func decodeSlice(v reflect.Value, x interface{}) { +func (d decoder) decodeSlice(v reflect.Value, x interface{}) { t := v.Type() if t.Elem().Kind() == reflect.Uint8 { // Allow, but don't require, byte slices to be encoded as a single string. @@ -221,11 +261,11 @@ func decodeSlice(v reflect.Value, x interface{}) { delta := i - l + 1 v.Set(reflect.AppendSlice(v, reflect.MakeSlice(t, delta, delta))) } - decodeValue(v.Index(i), c) + d.decodeValue(v.Index(i), c) } } -func decodeBasic(v reflect.Value, x interface{}) { +func (d decoder) decodeBasic(v reflect.Value, x interface{}) { t := v.Type() switch k, s := t.Kind(), getString(x); k { case reflect.Bool: @@ -276,7 +316,7 @@ func decodeBasic(v reflect.Value, x interface{}) { } } -func decodeTime(v reflect.Value, x interface{}) { +func (d decoder) decodeTime(v reflect.Value, x interface{}) { t := v.Type() s := getString(x) // TODO: Find a more efficient way to do this. @@ -289,7 +329,7 @@ func decodeTime(v reflect.Value, x interface{}) { panic("cannot decode string `" + s + "` as " + t.String()) } -func decodeURL(v reflect.Value, x interface{}) { +func (d decoder) decodeURL(v reflect.Value, x interface{}) { t := v.Type() s := getString(x) if u, err := url.Parse(s); err == nil { diff --git a/vendor/github.com/ajg/form/encode.go b/vendor/github.com/ajg/form/encode.go index 4c6f6c869d4e..3e824c6c677c 100644 --- a/vendor/github.com/ajg/form/encode.go +++ b/vendor/github.com/ajg/form/encode.go @@ -18,12 +18,26 @@ import ( // NewEncoder returns a new form encoder. func NewEncoder(w io.Writer) *encoder { - return &encoder{w} + return &encoder{w, defaultDelimiter, defaultEscape} } // encoder provides a way to encode to a Writer. type encoder struct { w io.Writer + d rune + e rune +} + +// DelimitWith sets r as the delimiter used for composite keys by encoder e and returns the latter; it is '.' by default. +func (e *encoder) DelimitWith(r rune) *encoder { + e.d = r + return e +} + +// EscapeWith sets r as the escape used for delimiters (and to escape itself) by encoder e and returns the latter; it is '\\' by default. +func (e *encoder) EscapeWith(r rune) *encoder { + e.e = r + return e } // Encode encodes dst as form and writes it out using the encoder's Writer. 
@@ -33,7 +47,7 @@ func (e encoder) Encode(dst interface{}) error { if err != nil { return err } - s := n.Values().Encode() + s := n.values(e.d, e.e).Encode() l, err := io.WriteString(e.w, s) switch { case err != nil: @@ -51,7 +65,8 @@ func EncodeToString(dst interface{}) (string, error) { if err != nil { return "", err } - return n.Values().Encode(), nil + vs := n.values(defaultDelimiter, defaultEscape) + return vs.Encode(), nil } // EncodeToValues encodes dst as a form and returns it as Values. @@ -61,7 +76,8 @@ func EncodeToValues(dst interface{}) (url.Values, error) { if err != nil { return nil, err } - return n.Values(), nil + vs := n.values(defaultDelimiter, defaultEscape) + return vs, nil } func encodeToNode(v reflect.Value) (n node, err error) { @@ -258,9 +274,16 @@ func fieldInfo(f reflect.StructField) (k string, oe bool) { return k, oe } -func findField(v reflect.Value, n string) (reflect.Value, bool) { +func findField(v reflect.Value, n string, ignoreCase bool) (reflect.Value, bool) { t := v.Type() l := v.NumField() + + var lowerN string + caseInsensitiveMatch := -1 + if ignoreCase { + lowerN = strings.ToLower(n) + } + // First try named fields. for i := 0; i < l; i++ { f := t.Field(i) @@ -269,9 +292,16 @@ func findField(v reflect.Value, n string) (reflect.Value, bool) { continue } else if n == k { return v.Field(i), true + } else if ignoreCase && lowerN == strings.ToLower(k) { + caseInsensitiveMatch = i } } + // If no exact match was found try case insensitive match. + if caseInsensitiveMatch != -1 { + return v.Field(caseInsensitiveMatch), true + } + // Then try anonymous (embedded) fields. for i := 0; i < l; i++ { f := t.Field(i) @@ -289,7 +319,7 @@ func findField(v reflect.Value, n string) (reflect.Value, bool) { if fk != reflect.Struct { continue } - if ev, ok := findField(fv, n); ok { + if ev, ok := findField(fv, n, ignoreCase); ok { return ev, true } } diff --git a/vendor/github.com/ajg/form/form.go b/vendor/github.com/ajg/form/form.go index 7c74f3d57735..4052369cfebc 100644 --- a/vendor/github.com/ajg/form/form.go +++ b/vendor/github.com/ajg/form/form.go @@ -8,4 +8,7 @@ package form const ( implicitKey = "_" omittedKey = "-" + + defaultDelimiter = '.' + defaultEscape = '\\' ) diff --git a/vendor/github.com/ajg/form/node.go b/vendor/github.com/ajg/form/node.go index e4a04e5bdd41..567aaafde129 100644 --- a/vendor/github.com/ajg/form/node.go +++ b/vendor/github.com/ajg/form/node.go @@ -12,19 +12,19 @@ import ( type node map[string]interface{} -func (n node) Values() url.Values { +func (n node) values(d, e rune) url.Values { vs := url.Values{} - n.merge("", &vs) + n.merge(d, e, "", &vs) return vs } -func (n node) merge(p string, vs *url.Values) { +func (n node) merge(d, e rune, p string, vs *url.Values) { for k, x := range n { switch y := x.(type) { case string: - vs.Add(p+escape(k), y) + vs.Add(p+escape(d, e, k), y) case node: - y.merge(p+escape(k)+".", vs) + y.merge(d, e, p+escape(d, e, k)+string(d), vs) default: panic("value is neither string nor node") } @@ -32,7 +32,7 @@ func (n node) merge(p string, vs *url.Values) { } // TODO: Add tests for implicit indexing. -func parseValues(vs url.Values, canIndexFirstLevelOrdinally bool) node { +func parseValues(d, e rune, vs url.Values, canIndexFirstLevelOrdinally bool) node { // NOTE: Because of the flattening of potentially multiple strings to one key, implicit indexing works: // i. At the first level; e.g. Foo.Bar=A&Foo.Bar=B becomes 0.Foo.Bar=A&1.Foo.Bar=B // ii. At the last level; e.g. 
Foo.Bar._=A&Foo.Bar._=B becomes Foo.Bar.0=A&Foo.Bar.1=B @@ -41,11 +41,11 @@ func parseValues(vs url.Values, canIndexFirstLevelOrdinally bool) node { m := map[string]string{} for k, ss := range vs { - indexLastLevelOrdinally := strings.HasSuffix(k, "."+implicitKey) + indexLastLevelOrdinally := strings.HasSuffix(k, string(d)+implicitKey) for i, s := range ss { if canIndexFirstLevelOrdinally { - k = strconv.Itoa(i) + "." + k + k = strconv.Itoa(i) + string(d) + k } else if indexLastLevelOrdinally { k = strings.TrimSuffix(k, implicitKey) + strconv.Itoa(i) } @@ -56,28 +56,28 @@ func parseValues(vs url.Values, canIndexFirstLevelOrdinally bool) node { n := node{} for k, s := range m { - n = n.split(k, s) + n = n.split(d, e, k, s) } return n } -func splitPath(path string) (k, rest string) { +func splitPath(d, e rune, path string) (k, rest string) { esc := false for i, r := range path { switch { - case !esc && r == '\\': + case !esc && r == e: esc = true - case !esc && r == '.': - return unescape(path[:i]), path[i+1:] + case !esc && r == d: + return unescape(d, e, path[:i]), path[i+1:] default: esc = false } } - return unescape(path), "" + return unescape(d, e, path), "" } -func (n node) split(path, s string) node { - k, rest := splitPath(path) +func (n node) split(d, e rune, path, s string) node { + k, rest := splitPath(d, e, path) if rest == "" { return add(n, k, s) } @@ -86,7 +86,7 @@ func (n node) split(path, s string) node { } c := getNode(n[k]) - n[k] = c.split(rest, s) + n[k] = c.split(d, e, rest, s) return n } @@ -139,10 +139,14 @@ func getString(x interface{}) string { panic("value is neither string nor node") } -func escape(s string) string { - return strings.Replace(strings.Replace(s, `\`, `\\`, -1), `.`, `\.`, -1) +func escape(d, e rune, s string) string { + s = strings.Replace(s, string(e), string(e)+string(e), -1) // Escape the escape (\ => \\) + s = strings.Replace(s, string(d), string(e)+string(d), -1) // Escape the delimiter (. => \.) + return s } -func unescape(s string) string { - return strings.Replace(strings.Replace(s, `\.`, `.`, -1), `\\`, `\`, -1) +func unescape(d, e rune, s string) string { + s = strings.Replace(s, string(e)+string(d), string(d), -1) // Unescape the delimiter (\. => .) + s = strings.Replace(s, string(e)+string(e), string(e), -1) // Unescape the escape (\\ => \) + return s } diff --git a/vendor/github.com/ajg/form/pre-commit.sh b/vendor/github.com/ajg/form/pre-commit.sh old mode 100644 new mode 100755 diff --git a/vendor/github.com/hashicorp/atlas-go/v1/client.go b/vendor/github.com/hashicorp/atlas-go/v1/client.go index 2e61e064b277..b5ee211a3743 100644 --- a/vendor/github.com/hashicorp/atlas-go/v1/client.go +++ b/vendor/github.com/hashicorp/atlas-go/v1/client.go @@ -70,6 +70,10 @@ type Client struct { // HTTPClient is the underlying http client with which to make requests. HTTPClient *http.Client + + // DefaultHeaders is a set of headers that will be added to every request. + // This minimally includes the atlas user-agent string. + DefaultHeader http.Header } // DefaultClient returns a client that connects to the Atlas API. @@ -108,10 +112,13 @@ func NewClient(urlString string) (*Client, error) { } client := &Client{ - URL: parsedURL, - Token: token, + URL: parsedURL, + Token: token, + DefaultHeader: make(http.Header), } + client.DefaultHeader.Set("User-Agent", userAgent) + if err := client.init(); err != nil { return nil, err } @@ -227,10 +234,12 @@ func (c *Client) rawRequest(verb string, u *url.URL, ro *RequestOptions) (*http. 
return nil, err } - // Set the User-Agent - request.Header.Set("User-Agent", userAgent) + // set our default headers first + for k, v := range c.DefaultHeader { + request.Header[k] = v + } - // Add any headers (auth will be here if set) + // Add any request headers (auth will be here if set) for k, v := range ro.Headers { request.Header.Add(k, v) } diff --git a/vendor/github.com/sethvargo/go-fastly/.gitignore b/vendor/github.com/sethvargo/go-fastly/.gitignore deleted file mode 100644 index c9bf9732767d..000000000000 --- a/vendor/github.com/sethvargo/go-fastly/.gitignore +++ /dev/null @@ -1,28 +0,0 @@ -### Go ### -# Compiled Object files, Static and Dynamic libs (Shared Objects) -*.o -*.a -*.so - -# Folders -_obj -_test - -# Architecture specific extensions/prefixes -*.[568vq] -[568vq].out - -*.cgo1.go -*.cgo2.c -_cgo_defun.c -_cgo_gotypes.go -_cgo_export.* - -_testmain.go - -*.exe -*.test -*.prof - -bin/ -pkg/ diff --git a/vendor/github.com/sethvargo/go-fastly/.travis.yml b/vendor/github.com/sethvargo/go-fastly/.travis.yml deleted file mode 100644 index 9330e8bd61b7..000000000000 --- a/vendor/github.com/sethvargo/go-fastly/.travis.yml +++ /dev/null @@ -1,19 +0,0 @@ -sudo: false - -language: go - -go: - - 1.4.2 - - 1.5.2 - -branches: - only: - - master - -script: - - make updatedeps - - make test - -env: - # FASTLY_API_KEY - - secure: "eiYcogJFF+lK/6coFXaOOm0bDHxaK1qqZ0GinMmPXmQ6nonf56omMVxNyOsV+6jz/fdJCA7gfGv600raTAOVNxD23E/p2j6yxPSI5O6itxp0jxJm7p6MOHwkmsXFZGfLxaqVN2CHs+W3sSc4cwzCkCqlik4XLXihHvzYpjBk1AZK6QUMWqdTcDYDWMfk5CW1O6wUpmYiFwlnENwDopGQlSs1+PyEiDEbEMYu1yVUq+f83IJ176arM4XL8NS2GN1QMBKyALA+6jpT/OrFtW5tkheE+WOQ6+/ZnDCtY0i1RA8BBuyACYuf+WEAkmWfJGGk7+Ou6q2JFzIBsd6ZS3EsM4bs4P1FyhPBwK5zyFty2w7+PwVm6wrZ0NfUh6BKsfCF9MweypsKq+F+4GOcpjdCYPKZKGRjQ4iKOZVVzaVGLRanz1EHiXUcLT+DDr0kFbvrLCLqCPvujBfqeUDqVZMpsUqir9HWqVKutczAnYzFaoeeSVap14J/sd6kcgZo2bNMSRQvMoPCOvicdW8RLIV8Hyx2l0Cv596ZfinWBk2Dcmn6APLkbrBpvhv6/SUtBKHMijjFc5VvoxO3ZP6vUCueDaZVNWkX1xk+VA5PD0T/IcilLy3+nBedz+3lmiW7dnQPuWnlPBFPWvYZvW2KaDOazv5rZK+pKIq32BIyhP/n/AU=" diff --git a/vendor/github.com/sethvargo/go-fastly/Makefile b/vendor/github.com/sethvargo/go-fastly/Makefile index 8391618cc0a3..2addc380e4dd 100644 --- a/vendor/github.com/sethvargo/go-fastly/Makefile +++ b/vendor/github.com/sethvargo/go-fastly/Makefile @@ -1,21 +1,42 @@ TEST?=./... +NAME?=$(shell basename "${CURDIR}") +EXTERNAL_TOOLS=\ + github.com/mitchellh/gox default: test -# test runs the test suite and vets the code +# test runs the test suite and vets the code. test: generate - go list $(TEST) | xargs -n1 go test -timeout=30s -parallel=12 $(TESTARGS) + @echo "==> Running tests..." + @go list $(TEST) \ + | grep -v "github.com/sethvargo/${NAME}/vendor" \ + | xargs -n1 go test -timeout=60s -parallel=10 ${TESTARGS} -# updatedeps installs all the dependencies the library needs to run and build +# testrace runs the race checker +testrace: generate + @echo "==> Running tests (race)..." + @go list $(TEST) \ + | grep -v "github.com/sethvargo/${NAME}/vendor" \ + | xargs -n1 go test -timeout=60s -race ${TESTARGS} + +# updatedeps installs all the dependencies needed to run and build. updatedeps: - go list ./... \ - | xargs go list -f '{{ join .Deps "\n" }}{{ printf "\n" }}{{ join .TestImports "\n" }}' \ - | grep -v github.com/sethvargo/go-fastly \ - | xargs go get -f -u -v + @sh -c "'${CURDIR}/scripts/deps.sh' '${NAME}'" -# generate runs `go generate` to build the dynamically generated source files +# generate runs `go generate` to build the dynamically generated source files. 
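The thread running through the `ajg/form` changes above is that the delimiter and escape runes are now parameters (`d`, `e`) instead of the hard-coded `.` and `\`, which is what lets go-fastly's `RequestForm` (further down in this diff) sidestep the old `%5C.` mangling by delimiting with `|`. A minimal sketch of the new encoder API, using a hypothetical struct whose form tag contains a literal dot:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/ajg/form"
)

// setting is a hypothetical type; its form tag contains a literal
// '.' the way several Fastly form fields do.
type setting struct {
	Host string `form:"general.default_host"`
}

func main() {
	var buf bytes.Buffer

	// With the default '.' delimiter this key would be escaped to
	// `general\.default_host`, which URL-encodes to the
	// `general%5C.default_host` form that Fastly rejects. Delimiting
	// with '|' leaves the '.' untouched.
	if err := form.NewEncoder(&buf).DelimitWith('|').Encode(setting{Host: "example.com"}); err != nil {
		log.Fatal(err)
	}
	fmt.Println(buf.String()) // general.default_host=example.com
}
```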
generate: - find . -type f -name '.DS_Store' -delete - go generate ./... + @echo "==> Generating..." + @find . -type f -name '.DS_Store' -delete + @go list ./... \ + | grep -v "github.com/hashicorp/${NAME}/vendor" \ + | xargs -n1 go generate + +# bootstrap installs the necessary go tools for development/build. +bootstrap: + @echo "==> Bootstrapping..." + @for t in ${EXTERNAL_TOOLS}; do \ + echo "--> Installing "$$t"..." ; \ + go get -u "$$t"; \ + done -.PHONY: default bin dev dist test testrace updatedeps generate +.PHONY: default test testrace updatedeps generate bootstrap diff --git a/vendor/github.com/sethvargo/go-fastly/cache_setting.go b/vendor/github.com/sethvargo/go-fastly/cache_setting.go index 3f5aebe24cf4..79ba5c64ccba 100644 --- a/vendor/github.com/sethvargo/go-fastly/cache_setting.go +++ b/vendor/github.com/sethvargo/go-fastly/cache_setting.go @@ -164,7 +164,7 @@ type UpdateCacheSettingInput struct { NewName string `form:"name,omitempty"` Action CacheSettingAction `form:"action,omitempty"` TTL uint `form:"ttl,omitempty"` - StateTTL uint `form:"stale_ttl,omitempty"` + StaleTTL uint `form:"stale_ttl,omitempty"` CacheCondition string `form:"cache_condition,omitempty"` } diff --git a/vendor/github.com/sethvargo/go-fastly/client.go b/vendor/github.com/sethvargo/go-fastly/client.go index 8006beecadf3..4c670601f3bd 100644 --- a/vendor/github.com/sethvargo/go-fastly/client.go +++ b/vendor/github.com/sethvargo/go-fastly/client.go @@ -1,6 +1,7 @@ package fastly import ( + "bytes" "encoding/json" "fmt" "io" @@ -144,11 +145,6 @@ func (c *Client) Request(verb, p string, ro *RequestOptions) (*http.Response, er // RequestForm makes an HTTP request with the given interface being encoded as // form data. func (c *Client) RequestForm(verb, p string, i interface{}, ro *RequestOptions) (*http.Response, error) { - values, err := form.EncodeToValues(i) - if err != nil { - return nil, err - } - if ro == nil { ro = new(RequestOptions) } @@ -158,10 +154,11 @@ func (c *Client) RequestForm(verb, p string, i interface{}, ro *RequestOptions) } ro.Headers["Content-Type"] = "application/x-www-form-urlencoded" - // There is a super-jank implementation in the form library where fields with - // a "dot" are replaced with "/.". That is then URL encoded and Fastly just - // dies. We fix that here. - body := strings.Replace(values.Encode(), "%5C.", ".", -1) + buf := new(bytes.Buffer) + if err := form.NewEncoder(buf).DelimitWith('|').Encode(i); err != nil { + return nil, err + } + body := buf.String() ro.Body = strings.NewReader(body) ro.BodyLength = int64(len(body)) diff --git a/vendor/github.com/sethvargo/go-fastly/errors.go b/vendor/github.com/sethvargo/go-fastly/errors.go index e5a617323432..82f27ed21099 100644 --- a/vendor/github.com/sethvargo/go-fastly/errors.go +++ b/vendor/github.com/sethvargo/go-fastly/errors.go @@ -79,12 +79,12 @@ type HTTPError struct { // NewHTTPError creates a new HTTP error from the given code. 
func NewHTTPError(resp *http.Response) *HTTPError { - var e *HTTPError + var e HTTPError if resp.Body != nil { decodeJSON(&e, resp.Body) } e.StatusCode = resp.StatusCode - return e + return &e } // Error implements the error interface and returns the string representing the diff --git a/vendor/github.com/sethvargo/go-fastly/fastly.go b/vendor/github.com/sethvargo/go-fastly/fastly.go index 438aa4ad0e74..19e7c807ddfe 100644 --- a/vendor/github.com/sethvargo/go-fastly/fastly.go +++ b/vendor/github.com/sethvargo/go-fastly/fastly.go @@ -33,9 +33,9 @@ func (b Compatibool) MarshalText() ([]byte, error) { } // UnmarshalText implements the encoding.TextUnmarshaler interface. -func (b Compatibool) UnmarshalText(t []byte) error { +func (b *Compatibool) UnmarshalText(t []byte) error { if bytes.Equal(t, []byte("1")) { - b = Compatibool(true) + *b = Compatibool(true) } return nil } diff --git a/vendor/github.com/sethvargo/go-fastly/header.go b/vendor/github.com/sethvargo/go-fastly/header.go index 476d1195ed85..96eb7b2191c6 100644 --- a/vendor/github.com/sethvargo/go-fastly/header.go +++ b/vendor/github.com/sethvargo/go-fastly/header.go @@ -119,7 +119,7 @@ type CreateHeaderInput struct { Name string `form:"name,omitempty"` Action HeaderAction `form:"action,omitempty"` - IgnoreIfSet bool `form:"ignore_if_set,omitempty"` + IgnoreIfSet Compatibool `form:"ignore_if_set,omitempty"` Type HeaderType `form:"type,omitempty"` Destination string `form:"dst,omitempty"` Source string `form:"src,omitempty"` @@ -204,7 +204,7 @@ type UpdateHeaderInput struct { NewName string `form:"name,omitempty"` Action HeaderAction `form:"action,omitempty"` - IgnoreIfSet bool `form:"ignore_if_set,omitempty"` + IgnoreIfSet Compatibool `form:"ignore_if_set,omitempty"` Type HeaderType `form:"type,omitempty"` Destination string `form:"dst,omitempty"` Source string `form:"src,omitempty"` diff --git a/vendor/github.com/sethvargo/go-fastly/s3.go b/vendor/github.com/sethvargo/go-fastly/s3.go index 96d848825866..278fec1210a6 100644 --- a/vendor/github.com/sethvargo/go-fastly/s3.go +++ b/vendor/github.com/sethvargo/go-fastly/s3.go @@ -6,25 +6,33 @@ import ( "time" ) +type S3Redundancy string + +const ( + S3RedundancyStandard S3Redundancy = "standard" + S3RedundancyReduced S3Redundancy = "reduced_redundancy" +) + // S3 represents a S3 response from the Fastly API. 
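The `Compatibool` change above is the classic value-receiver pitfall: with `func (b Compatibool) UnmarshalText`, the assignment mutated a local copy, so every decoded value stayed `false`. A self-contained illustration of why the receiver must be a pointer:

```go
package main

import (
	"bytes"
	"fmt"
)

type Compatibool bool

// Value receiver: the write lands on a copy of b, so the caller
// never observes it. This mirrors the pre-fix behavior.
func (b Compatibool) unmarshalByValue(t []byte) {
	if bytes.Equal(t, []byte("1")) {
		b = true
	}
}

// Pointer receiver: the write goes through to the caller's value,
// as in the fixed UnmarshalText.
func (b *Compatibool) UnmarshalText(t []byte) error {
	if bytes.Equal(t, []byte("1")) {
		*b = Compatibool(true)
	}
	return nil
}

func main() {
	var b Compatibool
	b.unmarshalByValue([]byte("1"))
	fmt.Println(b) // false

	_ = b.UnmarshalText([]byte("1"))
	fmt.Println(b) // true
}
```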
type S3 struct { ServiceID string `mapstructure:"service_id"` Version string `mapstructure:"version"` - Name string `mapstructure:"name"` - BucketName string `mapstructure:"bucket_name"` - Domain string `mapstructure:"domain"` - AccessKey string `mapstructure:"access_key"` - SecretKey string `mapstructure:"secret_key"` - Path string `mapstructure:"path"` - Period uint `mapstructure:"period"` - GzipLevel uint `mapstructure:"gzip_level"` - Format string `mapstructure:"format"` - ResponseCondition string `mapstructure:"response_condition"` - TimestampFormat string `mapstructure:"timestamp_format"` - CreatedAt *time.Time `mapstructure:"created_at"` - UpdatedAt *time.Time `mapstructure:"updated_at"` - DeletedAt *time.Time `mapstructure:"deleted_at"` + Name string `mapstructure:"name"` + BucketName string `mapstructure:"bucket_name"` + Domain string `mapstructure:"domain"` + AccessKey string `mapstructure:"access_key"` + SecretKey string `mapstructure:"secret_key"` + Path string `mapstructure:"path"` + Period uint `mapstructure:"period"` + GzipLevel uint `mapstructure:"gzip_level"` + Format string `mapstructure:"format"` + ResponseCondition string `mapstructure:"response_condition"` + TimestampFormat string `mapstructure:"timestamp_format"` + Redundancy S3Redundancy `mapstructure:"redundancy"` + CreatedAt *time.Time `mapstructure:"created_at"` + UpdatedAt *time.Time `mapstructure:"updated_at"` + DeletedAt *time.Time `mapstructure:"deleted_at"` } // s3sByName is a sortable list of S3s. @@ -77,17 +85,18 @@ type CreateS3Input struct { Service string Version string - Name string `form:"name,omitempty"` - BucketName string `form:"bucket_name,omitempty"` - Domain string `form:"domain,omitempty"` - AccessKey string `form:"access_key,omitempty"` - SecretKey string `form:"secret_key,omitempty"` - Path string `form:"path,omitempty"` - Period uint `form:"period,omitempty"` - GzipLevel uint `form:"gzip_level,omitempty"` - Format string `form:"format,omitempty"` - ResponseCondition string `form:"response_condition,omitempty"` - TimestampFormat string `form:"timestamp_format,omitempty"` + Name string `form:"name,omitempty"` + BucketName string `form:"bucket_name,omitempty"` + Domain string `form:"domain,omitempty"` + AccessKey string `form:"access_key,omitempty"` + SecretKey string `form:"secret_key,omitempty"` + Path string `form:"path,omitempty"` + Period uint `form:"period,omitempty"` + GzipLevel uint `form:"gzip_level,omitempty"` + Format string `form:"format,omitempty"` + ResponseCondition string `form:"response_condition,omitempty"` + TimestampFormat string `form:"timestamp_format,omitempty"` + Redundancy S3Redundancy `form:"redundancy,omitempty"` } // CreateS3 creates a new Fastly S3. @@ -161,17 +170,18 @@ type UpdateS3Input struct { // Name is the name of the S3 to update. 
Name string - NewName string `form:"name,omitempty"` - BucketName string `form:"bucket_name,omitempty"` - Domain string `form:"domain,omitempty"` - AccessKey string `form:"access_key,omitempty"` - SecretKey string `form:"secret_key,omitempty"` - Path string `form:"path,omitempty"` - Period uint `form:"period,omitempty"` - GzipLevel uint `form:"gzip_level,omitempty"` - Format string `form:"format,omitempty"` - ResponseCondition string `form:"response_condition,omitempty"` - TimestampFormat string `form:"timestamp_format,omitempty"` + NewName string `form:"name,omitempty"` + BucketName string `form:"bucket_name,omitempty"` + Domain string `form:"domain,omitempty"` + AccessKey string `form:"access_key,omitempty"` + SecretKey string `form:"secret_key,omitempty"` + Path string `form:"path,omitempty"` + Period uint `form:"period,omitempty"` + GzipLevel uint `form:"gzip_level,omitempty"` + Format string `form:"format,omitempty"` + ResponseCondition string `form:"response_condition,omitempty"` + TimestampFormat string `form:"timestamp_format,omitempty"` + Redundancy S3Redundancy `form:"redundancy,omitempty"` } // UpdateS3 updates a specific S3. diff --git a/vendor/github.com/sethvargo/go-fastly/version.go b/vendor/github.com/sethvargo/go-fastly/version.go index fd22f90674de..8b54c9ceea53 100644 --- a/vendor/github.com/sethvargo/go-fastly/version.go +++ b/vendor/github.com/sethvargo/go-fastly/version.go @@ -7,14 +7,14 @@ import ( // Version represents a distinct configuration version. type Version struct { - Number string `mapstructure:"number"` - Comment string `mapstructure:"comment"` - ServiceID string `mapstructure:"service_id"` - Active bool `mapstructure:"active"` - Locked bool `mapstructure:"locked"` - Deployed bool `mapstructure:"deployed"` - Staging bool `mapstructure:"staging"` - Testing bool `mapstructure:"testing"` + Number string `mapstructure:"number"` + Comment string `mapstructure:"comment"` + ServiceID string `mapstructure:"service_id"` + Active bool `mapstructure:"active"` + Locked bool `mapstructure:"locked"` + Deployed bool `mapstructure:"deployed"` + Staging bool `mapstructure:"staging"` + Testing bool `mapstructure:"testing"` } // versionsByNumber is a sortable list of versions. 
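The new `S3Redundancy` string type narrows the `redundancy` field to the two values Fastly accepts. A rough usage sketch, assuming the package's `NewClient` constructor and `CreateS3` method keep their usual shapes (all IDs and keys below are placeholders):

```go
package main

import (
	"fmt"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	s3, err := client.CreateS3(&fastly.CreateS3Input{
		Service:    "SERVICE_ID", // placeholder service ID
		Version:    "1",
		Name:       "s3-logs",
		BucketName: "my-log-bucket",
		AccessKey:  "AWS_ACCESS_KEY",
		SecretKey:  "AWS_SECRET_KEY",
		// The typed constants make the two valid settings explicit.
		Redundancy: fastly.S3RedundancyReduced,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s3.Name)
}
```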
This is used by the version
diff --git a/vendor/vendor.json b/vendor/vendor.json
index 76f97080bc99..ac0edde9b80b 100644
--- a/vendor/vendor.json
+++ b/vendor/vendor.json
@@ -250,8 +250,10 @@
 		"revision": "9b82b0372a4edf52f66fbc8feaa6aafe0123001d"
 	},
 	{
+		"checksumSHA1": "csR8njyJfkweB0RCtfnLwgXNeqQ=",
 		"path": "github.com/ajg/form",
-		"revision": "c9e1c3ae1f869d211cdaa085d23c6af2f5f83866"
+		"revision": "7ff89c75808766205bfa4411abb436c98c33eb5e",
+		"revisionTime": "2016-06-29T21:43:12Z"
 	},
 	{
 		"path": "github.com/apparentlymart/go-cidr/cidr",
@@ -920,9 +922,11 @@
 		"revision": "95fa852edca41c06c4ce526af4bb7dec4eaad434"
 	},
 	{
+		"checksumSHA1": "EWGfo74RcoKaYFZNSkvzYRJMgrY=",
 		"comment": "20141209094003-92-g95fa852",
 		"path": "github.com/hashicorp/atlas-go/v1",
-		"revision": "95fa852edca41c06c4ce526af4bb7dec4eaad434"
+		"revision": "c8b26aa95f096efc0f378b2d2830ca909631d584",
+		"revisionTime": "2016-07-22T13:58:36Z"
 	},
 	{
 		"comment": "v0.6.3-28-g3215b87",
@@ -1615,8 +1619,10 @@
 		"revision": "d41af8bb6a7704f00bc3b7cba9355ae6a5a80048"
 	},
 	{
+		"checksumSHA1": "DWJoWDXcRi4HUCyxU6dLVVjR4pI=",
 		"path": "github.com/sethvargo/go-fastly",
-		"revision": "6566b161e807516f4a45bc3054eac291a120e217"
+		"revision": "b0a18d43769d55365d4fbd9ba36493e5c0dcd8f5",
+		"revisionTime": "2016-07-08T18:18:56Z"
 	},
 	{
 		"comment": "v1.1-2-g5578a8c",
diff --git a/website/source/docs/providers/aws/d/ecs_container_definition.html.markdown b/website/source/docs/providers/aws/d/ecs_container_definition.html.markdown
new file mode 100644
index 000000000000..6badc81dd34b
--- /dev/null
+++ b/website/source/docs/providers/aws/d/ecs_container_definition.html.markdown
@@ -0,0 +1,40 @@
+---
+layout: "aws"
+page_title: "AWS: aws_ecs_container_definition"
+sidebar_current: "docs-aws-datasource-ecs-container-definition"
+description: |-
+  Provides details about a single container within an ECS task definition
+---
+
+# aws\_ecs\_container\_definition
+
+The ECS container definition data source allows access to details of
+a specific container within an AWS ECS task definition.
+
+## Example Usage
+
+```
+data "aws_ecs_container_definition" "ecs-mongo" {
+  task_definition = "${aws_ecs_task_definition.mongo.id}"
+  container_name = "mongodb"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `task_definition` - (Required) The ARN of the task definition which contains the container
+* `container_name` - (Required) The name of the container definition
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `image` - The Docker image in use, including the digest
+* `image_digest` - The digest of the Docker image in use
+* `cpu` - The CPU limit for this container definition
+* `memory` - The memory limit for this container definition
+* `environment` - The environment in use
+* `disable_networking` - Indicator if networking is disabled
+* `docker_labels` - The set of Docker labels assigned to the container
diff --git a/website/source/docs/providers/aws/r/ami.html.markdown b/website/source/docs/providers/aws/r/ami.html.markdown
index 25ac04db62ad..1a579407e102 100644
--- a/website/source/docs/providers/aws/r/ami.html.markdown
+++ b/website/source/docs/providers/aws/r/ami.html.markdown
@@ -14,6 +14,9 @@ The AMI resource allows the creation and management of a completely-custom
 If you just want to duplicate an existing AMI, possibly copying it to another
 region, it's better to use `aws_ami_copy` instead.
 
+If you just want to share an existing AMI with another AWS account,
+it's better to use `aws_ami_launch_permission` instead.
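For context on the note above: an AMI launch permission is just the `launchPermission` image attribute in EC2, so the new resource boils down to a call like the following aws-sdk-go sketch (illustrative only, not the resource's actual code; region, AMI ID, and account ID are placeholders):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.New(&aws.Config{Region: aws.String("us-east-1")}))

	// Grant account 123456789012 permission to launch ami-12345678;
	// both IDs are placeholders.
	_, err := svc.ModifyImageAttribute(&ec2.ModifyImageAttributeInput{
		ImageId:   aws.String("ami-12345678"),
		Attribute: aws.String("launchPermission"),
		LaunchPermission: &ec2.LaunchPermissionModifications{
			Add: []*ec2.LaunchPermission{
				{UserId: aws.String("123456789012")},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```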
+
 ## Example Usage
 
 ```
diff --git a/website/source/docs/providers/aws/r/ami_launch_permission.html.markdown b/website/source/docs/providers/aws/r/ami_launch_permission.html.markdown
new file mode 100644
index 000000000000..7be0ffd59981
--- /dev/null
+++ b/website/source/docs/providers/aws/r/ami_launch_permission.html.markdown
@@ -0,0 +1,33 @@
+---
+layout: "aws"
+page_title: "AWS: aws_ami_launch_permission"
+sidebar_current: "docs-aws-resource-ami-launch-permission"
+description: |-
+  Adds a launch permission to an Amazon Machine Image (AMI).
+---
+
+# aws\_ami\_launch\_permission
+
+Adds a launch permission to an Amazon Machine Image (AMI) for another AWS account.
+
+## Example Usage
+
+```
+resource "aws_ami_launch_permission" "example" {
+  image_id = "ami-12345678"
+  account_id = "123456789012"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+  * `image_id` - (Required) The ID of the AMI to modify.
+  * `account_id` - (Required) The AWS account ID to grant launch permission to.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+  * `id` - A combination of "`image_id`-`account_id`".
diff --git a/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown b/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown
index d5492517f236..3ab996ac8d07 100644
--- a/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown
+++ b/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown
@@ -51,10 +51,14 @@ The following arguments are supported:
   off of. Example stacks can be found in the [Amazon API documentation][1]
 * `template_name` – (Optional) The name of the Elastic Beanstalk Configuration
 template to use in deployment
-* `wait_for_ready_timeout` - (Default: "10m") The maximum
+* `wait_for_ready_timeout` - (Default: `10m`) The maximum
 [duration](https://golang.org/pkg/time/#ParseDuration) that Terraform should
 wait for an Elastic Beanstalk Environment to be in a ready state before timing
 out.
+* `poll_interval` – The time between polling the AWS API to
+check if changes have been applied. Use this to adjust the rate of API calls
+for any `create` or `update` action. Minimum `10s`, maximum `180s`. Omit this to
+use the default behavior, which is an exponential backoff.
 * `tags` – (Optional) A set of tags to apply to the Environment. **Note:** at
 this time the Elastic Beanstalk API does not provide a programatic way of
 changing these tags after initial application
diff --git a/website/source/docs/providers/aws/r/iam_user.html.markdown b/website/source/docs/providers/aws/r/iam_user.html.markdown
index acc79b749e01..3ec681cd6fd0 100644
--- a/website/source/docs/providers/aws/r/iam_user.html.markdown
+++ b/website/source/docs/providers/aws/r/iam_user.html.markdown
@@ -48,6 +48,9 @@ The following arguments are supported:
 
 * `name` - (Required) The user's name.
 * `path` - (Optional, default "/") Path in which to create the user.
+* `force_destroy` - (Optional, default false) When destroying this user, destroy
+  even if it has non-Terraform-managed IAM access keys. Without `force_destroy`
+  a user with non-Terraform-managed access keys will fail to be destroyed.
 
 ## Attributes Reference
 
@@ -65,4 +68,4 @@ IAM Users can be imported using the `name`, e.g.
``` $ terraform import aws_iam_user.lb loadbalancer -``` \ No newline at end of file +``` diff --git a/website/source/docs/providers/aws/r/kinesis_stream.html.markdown b/website/source/docs/providers/aws/r/kinesis_stream.html.markdown index 90220bffb84c..1ae13bfcfb2d 100644 --- a/website/source/docs/providers/aws/r/kinesis_stream.html.markdown +++ b/website/source/docs/providers/aws/r/kinesis_stream.html.markdown @@ -20,6 +20,10 @@ resource "aws_kinesis_stream" "test_stream" { name = "terraform-kinesis-test" shard_count = 1 retention_period = 48 + shard_level_metrics = [ + "IncomingBytes", + "OutgoingBytes" + ] tags { Environment = "test" } @@ -36,6 +40,7 @@ AWS account and region the Stream is created in. Amazon has guidlines for specifying the Stream size that should be referenced when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more. * `retention_period` - (Optional) Length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours. Minimum value is 24. Default is 24. +* `shard_level_metrics` - (Optional) A list of shard-level CloudWatch metrics which can be enabled for the stream. See [Monitoring with CloudWatch][3] for more. Note that the value ALL should not be used; instead you should provide an explicit list of metrics you wish to enable. * `tags` - (Optional) A mapping of tags to assign to the resource. ## Attributes Reference @@ -48,3 +53,4 @@ when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more. [1]: https://aws.amazon.com/documentation/kinesis/ [2]: https://docs.aws.amazon.com/kinesis/latest/dev/amazon-kinesis-streams.html +[3]: https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-cloudwatch.html diff --git a/website/source/docs/providers/aws/r/rds_cluster.html.markdown b/website/source/docs/providers/aws/r/rds_cluster.html.markdown index 7229a926c8cd..90e373eac239 100644 --- a/website/source/docs/providers/aws/r/rds_cluster.html.markdown +++ b/website/source/docs/providers/aws/r/rds_cluster.html.markdown @@ -19,11 +19,11 @@ Changes to a RDS Cluster can occur when you manually change a parameter, such as `port`, and are reflected in the next maintenance window. Because of this, Terraform may report a difference in it's planning phase because a modification has not yet taken place. You can use the -`apply_immediately` flag to instruct the service to apply the change immediately -(see documentation below). +`apply_immediately` flag to instruct the service to apply the change immediately +(see documentation below). -~> **Note:** using `apply_immediately` can result in a -brief downtime as the server reboots. See the AWS Docs on [RDS Maintenance][4] +~> **Note:** using `apply_immediately` can result in a +brief downtime as the server reboots. See the AWS Docs on [RDS Maintenance][4] for more information. ## Example Usage @@ -66,7 +66,7 @@ string. instances in the DB cluster can be created in * `backup_retention_period` - (Optional) The days to retain backups for. Default 1 -* `preferred_backup_window` - (Optional) The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter. +* `preferred_backup_window` - (Optional) The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter. Default: A 30-minute window selected at random from an 8-hour block of time per region. e.g. 
04:00-09:00
 * `preferred_maintenance_window` - (Optional) The weekly time range during which system maintenance can occur, in (UTC) e.g. wed:04:00-wed:04:30
 * `port` - (Optional) The port on which the DB accepts connections
@@ -88,13 +88,12 @@ The following attributes are exported:
 * `id` - The RDS Cluster Identifier
 * `cluster_identifier` - The RDS Cluster Identifier
 * `cluster_members` – List of RDS Instances that are a part of this cluster
-* `address` - The address of the RDS instance.
 * `allocated_storage` - The amount of allocated storage
 * `availability_zones` - The availability zone of the instance
 * `backup_retention_period` - The backup retention period
 * `preferred_backup_window` - The backup window
 * `preferred_maintenance_window` - The maintenance window
-* `endpoint` - The primary, writeable connection endpoint
+* `endpoint` - The DNS address of the RDS instance
 * `engine` - The database engine
 * `engine_version` - The database engine version
 * `maintenance_window` - The instance maintenance window
@@ -113,8 +112,8 @@ The following attributes are exported:
 
 ## Import
 
-RDS Clusters can be imported using the `cluster_identifier`, e.g. 
+RDS Clusters can be imported using the `cluster_identifier`, e.g.
 
 ```
 $ terraform import aws_rds_cluster.aurora_cluster aurora-prod-cluster
-```
\ No newline at end of file
+```
diff --git a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown
index 804a11458312..53df2a3c1050 100644
--- a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown
+++ b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown
@@ -80,7 +80,7 @@ The following attributes are exported:
 this instance is a read replica
 * `allocated_storage` - The amount of allocated storage
 * `availability_zones` - The availability zone of the instance
-* `endpoint` - The IP address for this instance. May not be writable
+* `endpoint` - The DNS address for this instance. May not be writable
 * `engine` - The database engine
 * `engine_version` - The database engine version
 * `database_name` - The database name
@@ -95,8 +95,8 @@ this instance is a read replica
 
 ## Import
 
-Redshift Cluster Instances can be imported using the `identifier`, e.g. 
+RDS Cluster Instances can be imported using the `identifier`, e.g.
 
 ```
 $ terraform import aws_rds_cluster_instance.prod_instance_1 aurora-cluster-instance-1
-```
\ No newline at end of file
+```
diff --git a/website/source/docs/providers/azure/index.html.markdown b/website/source/docs/providers/azure/index.html.markdown
index d7e6dcda43c6..fd063556f41f 100644
--- a/website/source/docs/providers/azure/index.html.markdown
+++ b/website/source/docs/providers/azure/index.html.markdown
@@ -1,14 +1,16 @@
 ---
 layout: "azure"
-page_title: "Provider: Azure"
+page_title: "Provider: Azure Service Management"
 sidebar_current: "docs-azure-index"
 description: |-
   The Azure provider is used to interact with the many resources supported by Azure. The provider needs to be configured with a publish settings file and optionally a subscription ID before it can be used.
 ---
 
-# Azure Provider
+# Azure Service Management Provider
 
-The Azure provider is used to interact with the many resources supported
+[arm]: /docs/providers/azurerm/index.html
+
+The Azure Service Management provider is used to interact with the many resources supported
 by Azure.
The provider needs to be configured with a [publish settings
file](https://manage.windowsazure.com/publishsettings) and optionally a
subscription ID before it can be used.
diff --git a/website/source/docs/providers/azurerm/index.html.markdown b/website/source/docs/providers/azurerm/index.html.markdown
index d669d1b80aae..e1f74142abdd 100644
--- a/website/source/docs/providers/azurerm/index.html.markdown
+++ b/website/source/docs/providers/azurerm/index.html.markdown
@@ -6,19 +6,22 @@ description: |-
   The Azure Resource Manager provider is used to interact with the many resources supported by Azure, via the ARM API. This supercedes the Azure provider, which interacts with Azure using the Service Management API. The provider needs to be configured with a credentials file, or credentials needed to generate OAuth tokens for the ARM API.
 ---
 
-# Azure Resource Manager Provider
+# Microsoft Azure Provider
 
-The Azure Resource Manager provider is used to interact with the many resources
-supported by Azure, via the ARM API. This supercedes the Azure provider, which
-interacts with Azure using the Service Management API. The provider needs to be
-configured with the credentials needed to generate OAuth tokens for the ARM API.
+The Microsoft Azure provider is used to interact with the many
+resources supported by Azure, via the ARM API. This supersedes the [legacy Azure
+provider][asm], which interacts with Azure using the Service Management API. The
+provider needs to be configured with the credentials needed to generate OAuth
+tokens for the ARM API.
+
+[asm]: /docs/providers/azure/index.html
 
 Use the navigation to the left to read about the available resources.
 
 ## Example Usage
 
 ```
-# Configure the Azure Resource Manager Provider
+# Configure the Microsoft Azure Provider
 provider "azurerm" {
   subscription_id = "..."
   client_id = "..."
diff --git a/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown b/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown index 0704de4e04cd..f9d0f231ec7f 100644 --- a/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown +++ b/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown @@ -32,18 +32,6 @@ resource "azurerm_subnet" "test" { address_prefix = "10.0.2.0/24" } -resource "azurerm_network_interface" "test" { - name = "acctni" - location = "West US" - resource_group_name = "${azurerm_resource_group.test.name}" - - ip_configuration { - name = "testconfiguration1" - subnet_id = "${azurerm_subnet.test.id}" - private_ip_address_allocation = "dynamic" - } -} - resource "azurerm_storage_account" "test" { name = "accsa" resource_group_name = "${azurerm_resource_group.test.name}" @@ -74,13 +62,13 @@ resource "azurerm_virtual_machine_scale_set" "test" { capacity = 2 } - virtual_machine_os_profile { + os_profile { computer_name_prefix = "testvm" admin_username = "myadmin" admin_password = "Passwword1234" } - virtual_machine_os_profile_linux_config { + os_profile_linux_config { disable_password_authentication = true ssh_keys { path = "/home/myadmin/.ssh/authorized_keys" @@ -88,7 +76,7 @@ resource "azurerm_virtual_machine_scale_set" "test" { } } - virtual_machine_network_profile { + network_profile { name = "TestNetworkProfile" primary = true ip_configuration { @@ -97,14 +85,14 @@ resource "azurerm_virtual_machine_scale_set" "test" { } } - virtual_machine_storage_profile_os_disk { + storage_profile_os_disk { name = "osDiskProfile" caching = "ReadWrite" create_option = "FromImage" vhd_containers = ["${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}"] } - virtual_machine_storage_profile_image_reference { + storage_profile_image_reference { publisher = "Canonical" offer = "UbuntuServer" sku = "14.04.2-LTS" diff --git a/website/source/intro/examples/consul.html.markdown b/website/source/intro/examples/consul.html.markdown index 47c896d36af4..28b6acf05e6d 100644 --- a/website/source/intro/examples/consul.html.markdown +++ b/website/source/intro/examples/consul.html.markdown @@ -25,14 +25,14 @@ and will default to "m1.small" if that key does not exist. Once the instance is the "tf\_test/id" and "tf\_test/public\_dns" keys will be set with the computed values for the instance. -Before we run the example, use the [Web UI](http://demo.consul.io/ui/#/nyc1/kv/) +Before we run the example, use the [Web UI](http://demo.consul.io/ui/#/nyc3/kv/) to set the "tf\_test/size" key to "t1.micro". Once that is done, copy the configuration into a configuration file ("consul.tf" works fine). Either provide the AWS credentials as a default value in the configuration or invoke `apply` with the appropriate variables set. Once the `apply` has completed, we can see the keys in Consul by -visiting the [Web UI](http://demo.consul.io/ui/#/nyc1/kv/). We can see +visiting the [Web UI](http://demo.consul.io/ui/#/nyc3/kv/). We can see that the "tf\_test/id" and "tf\_test/public\_dns" values have been set. 
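As an aside to the Consul example's instructions: the `tf_test/size` key can also be seeded programmatically instead of through the Web UI, e.g. with the Consul API client (a sketch assuming a local agent on the default address):

```go
package main

import (
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// DefaultConfig talks to a local agent on 127.0.0.1:8500.
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent to setting tf_test/size to "t1.micro" in the Web UI.
	pair := &consul.KVPair{Key: "tf_test/size", Value: []byte("t1.micro")}
	if _, err := client.KV().Put(pair, nil); err != nil {
		log.Fatal(err)
	}
}
```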
diff --git a/website/source/layouts/aws.erb b/website/source/layouts/aws.erb index a2529972c54c..8ef7ebbf0913 100644 --- a/website/source/layouts/aws.erb +++ b/website/source/layouts/aws.erb @@ -16,6 +16,9 @@ > aws_ami + > + aws_ecs_container_definition + > aws_availability_zones diff --git a/website/source/layouts/azure.erb b/website/source/layouts/azure.erb index 47356cba4180..30db8c68521b 100644 --- a/website/source/layouts/azure.erb +++ b/website/source/layouts/azure.erb @@ -7,7 +7,7 @@ > - Azure Provider + Azure Service Management Provider > @@ -78,9 +78,21 @@ + +
  • + Microsoft Azure Provider » +
  • <% end %> + + <%= yield %> <% end %> diff --git a/website/source/layouts/azurerm.erb b/website/source/layouts/azurerm.erb index 9d0f781e4432..089fb3dd1eea 100644 --- a/website/source/layouts/azurerm.erb +++ b/website/source/layouts/azurerm.erb @@ -8,7 +8,7 @@ > - Azure Resource Manager Provider + Microsoft Azure Provider > @@ -198,6 +198,9 @@ +
  • + Azure Service Management Provider » +
  • <% end %> diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index 287ff035098b..20b0b578cf52 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -170,14 +170,6 @@ AWS - > - Azure (Service Management) - - - > - Azure (Resource Manager) - - > Chef @@ -262,6 +254,14 @@ Mailgun + > + Microsoft Azure + + + > + Microsoft Azure (Legacy ASM) + + > MySQL